path | concatenated_notebook
---|---|
notebooks/Workflow.ipynb | ###Markdown
Workflow
###Code
%run relativepath.py
from opencanada.statscan import StatscanZip
%run displayoptions.py
%run prices_workflow.py
###Output
_____no_output_____ |
materiales_repaso/08-Python_06-Loops_usando_For.ipynb | ###Markdown
Loops using For Often, a program needs to repeat a block several times. That is where loops come in handy. Python uses a structure built around the keyword ``for`` to iterate over any sequence. For example, any string in Python is a sequence of its characters, so we can iterate over them using ``for``:
###Code
for character in 'hello':
print(character)
###Output
_____no_output_____
###Markdown
Another use case for ``for`` is iterating an integer variable in increasing or decreasing order. Such a sequence of integers can be created with the function ``range(min_value, max_value)``:
###Code
for i in range(5, 8):
print(i, i ** 2)
print('end of loop')
# 5 25
# 6 36
# 7 49
# end of loop
###Output
_____no_output_____
###Markdown
The function ``range(min_value, max_value)`` generates a sequence with the numbers min_value, min_value + 1, ..., max_value - 1. The last number is not included. There is a shorter form, ``range(max_value)``, in which case min_value is implicitly set to zero:
###Code
for i in range(3):
print(i)
# 0
# 1
# 2
###Output
_____no_output_____
###Markdown
This way, we can repeat some actions several times:
###Code
for i in range(2 ** 2):
print('Hello, world!')
###Output
_____no_output_____
###Markdown
Just as with ``if-else``, indentation is what specifies which statements are controlled by ``for`` and which are not. ``range()`` can define an empty sequence, such as ``range(-5)`` or ``range(7, 3)``. In that case, the ``for`` block will not execute:
###Code
for i in range(-5):
print('Hello, world!')
###Output
_____no_output_____
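###Markdown
The same applies to the other empty form mentioned above, ``range(7, 3)``, whose start is not below its end:
###Code
for i in range(7, 3):
    print('Hello, world!')  # never executed: range(7, 3) is an empty sequence
###Output
_____no_output_____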
###Markdown
Let's take a more complex example and add up the integers from 1 to n inclusive.
###Code
result = 0
n = 5
for i in range(1, n + 1):
result += i
# this ^^ is shorthand for
# result = result + i
print(result)
###Output
_____no_output_____
###Markdown
Note that the maximum value in ``range()`` is n + 1 so that i equals n on the last step. To iterate over a decreasing sequence, we can use an extended form of ``range()`` with three arguments: ``range(start_value, end_value, step)``. When omitted, the step is implicitly equal to 1. However, it can be any non-zero value. The loop always includes start_value and excludes end_value during iteration:
###Code
for i in range(10, 0, -2):
print(i)
# 10
# 8
# 6
# 4
# 2
###Output
_____no_output_____
###Markdown
EXERCISES 1- Given two integers A and B (A ≤ B), print all the numbers from A to B inclusive.
###Code
A, B = int(input("A")), int(input("B"))
assert A <= B
for i in range(A, B+1):
print(i)
###Output
A 1
B 7
###Markdown
2- Write a program that asks the user for a word and prints it on screen 10 times.
###Code
palabra = input()
for _ in range(10):
print(palabra)
###Output
hola
###Markdown
3- For a given integer N, compute the sum of cubes: 1^3 + 2^3 + ... + N^3
###Code
n = int(input("N"))
x = 0
for i in range(1, n+1):
x = x + i ** 3
print(x)
###Output
N 3
###Markdown
4- Write a program that asks the user for a positive integer and prints all the odd numbers from 1 up to that number, separated by commas.
###Code
n = int(input("n"))
assert n >= 0
x = ""
for i in range(1, n+1, 2):
x = f"{x},{i}" if i > 1 else str(i)
print(x)
###Output
n 5
###Markdown
5- Write a program that stores the vectors (1,2,3) and (-1,0,2) in two lists and prints their dot product.
###Code
a = [1,2,3]
b = [-1, 0, 2]
# dot product: sum of the element-wise products
c = sum(a[i]*b[i] for i in range(3))
print(c)
###Output
5
|
Machine Learning/ML0101EN-RecSys-Content-Based-movies-py-v1.ipynb | ###Markdown
CONTENT-BASED FILTERING Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous, and can be commonly seen in online stores, movies databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents Acquiring the Data Preprocessing Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash scripts: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens/). Lets download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. __Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
--2020-06-21 10:03:13-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[===================>] 152.88M 24.6MB/s in 6.6s
2020-06-21 10:03:20 (23.2 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unziping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also remove the year from the __title__ column by using pandas' replace function and store in a new __year__ column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parentheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '', regex=True)
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also split the values in the __Genres__ column into a __list of Genres__ to simplify future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
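###Markdown
As a side note, here is a minimal alternative sketch of the same one-hot encoding that avoids the explicit Python loop. It assumes a pandas version that provides `Series.explode` (0.25 or later); `moviesWithGenres_alt` is just an illustrative variable name.
###Code
#Explode the genre lists into one row per (movie, genre) pair, one-hot encode the genres,
#then collapse back to one row per movie and join onto the original dataframe
genre_dummies = pd.get_dummies(movies_df['genres'].explode()).groupby(level=0).max()
moviesWithGenres_alt = movies_df.join(genre_dummies).fillna(0)
moviesWithGenres_alt.head()
###Output
_____no_output_____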
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', axis=1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement __Content-Based__ or __Item-Item recommendation systems__. This technique attempts to figure out what a user's favourite aspects of an item are, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given. Let's begin by creating an input user to recommend movies to: Notice: To add more movies, simply increase the number of elements in the __userInput__. Feel free to add more in! Just be sure to write them with capital letters, and if a movie starts with a "The", like "The Matrix", then write it in like this: 'Matrix, The'.
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input user With the input complete, let's extract the input movies' IDs from the movies dataframe and add them into it. We can achieve this by first filtering out the rows that contain the input movies' titles and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop(['genres', 'year'], axis=1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might be spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary columns to save memory and to avoid issues
userGenreTable = userMovies.drop(['movieId', 'title', 'genres', 'year'], axis=1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences! To do this, we're going to turn each genre into a weight. We can do this by using the input's reviews, multiplying them into the input's genre table, and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can simply accomplish it by calling Pandas's "dot" function.
###Code
inputMovies['rating']
#Dot product to get weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
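###Markdown
As a small illustrative check (`manual_profile` is just a placeholder name), the same profile can be reproduced by hand: weight each movie's genre row by that movie's rating and sum every genre column.
###Code
#Multiply each movie's genre indicators by its rating, then sum per genre column
manual_profile = userGenreTable.mul(inputMovies['rating'], axis=0).sum()
#This should match the userProfile computed with the dot product above
manual_profile
###Output
_____no_output_____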
###Markdown
Now we have the weights for each of the user's preferences. This is known as the User Profile. Using it, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop(['movieId', 'title', 'genres', 'year'], axis=1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____
###Markdown
CONTENT-BASED FILTERING Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous and can be commonly seen in online stores, movie databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents- Acquiring the Data- Preprocessing- Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash scripts: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens/). Let's download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. __Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
--2020-06-16 21:58:06-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[===================>] 152.88M 26.0MB/s in 6.4s
2020-06-16 21:58:12 (24.1 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unziping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also remove the year from the __title__ column by using pandas' replace function and store in a new __year__ column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parentheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '', regex=True)
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also split the values in the __Genres__ column into a __list of Genres__ to simplify future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', axis=1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement __Content-Based__ or __Item-Item recommendation systems__. This technique attempts to figure out what a user's favourite aspects of an item are, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given. Let's begin by creating an input user to recommend movies to: Notice: To add more movies, simply increase the number of elements in the __userInput__. Feel free to add more in! Just be sure to write them with capital letters, and if a movie starts with a "The", like "The Matrix", then write it in like this: 'Matrix, The'.
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input user With the input complete, let's extract the input movies' IDs from the movies dataframe and add them into it. We can achieve this by first filtering out the rows that contain the input movies' titles and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop(['genres', 'year'], axis=1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might be spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary columns to save memory and to avoid issues
userGenreTable = userMovies.drop(['movieId', 'title', 'genres', 'year'], axis=1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences! To do this, we're going to turn each genre into a weight. We can do this by using the input's reviews, multiplying them into the input's genre table, and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can simply accomplish it by calling Pandas's "dot" function.
###Code
inputMovies['rating']
#Dot product to get weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
###Markdown
Now we have the weights for each of the user's preferences. This is known as the User Profile. Using it, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop(['movieId', 'title', 'genres', 'year'], axis=1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____ |
Housing-Prices/first_submission-Machine-Learning-Competition-Practice-Iowa-Home-Prices.ipynb | ###Markdown
Objective Predict the sales price for each house. For each Id in the test set, you must predict the value of the SalePrice variable. Competition Link: https://www.kaggle.com/c/home-data-for-ml-course/overview/description
###Code
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
home_data = pd.read_csv('data/train.csv')
# Create Target and Features
# Create target object and call it y
y = home_data.SalePrice
# Create X
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[features]
# Split into validation and training data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# Specify Model
iowa_model = DecisionTreeRegressor(random_state=1)
# Fit Model
iowa_model.fit(train_X, train_y)
# Make validation predictions and calculate mean absolute error
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE when not specifying max_leaf_nodes: {:,.0f}".format(val_mae))
# Using best value for max_leaf_nodes
iowa_model = DecisionTreeRegressor(max_leaf_nodes=100, random_state=1)
iowa_model.fit(train_X, train_y)
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE for best value of max_leaf_nodes: {:,.0f}".format(val_mae))
# Define the model. Set random_state to 1
rf_model = RandomForestRegressor(random_state=1)
rf_model.fit(train_X, train_y)
rf_val_predictions = rf_model.predict(val_X)
rf_val_mae = mean_absolute_error(rf_val_predictions, val_y)
print("Validation MAE for Random Forest Model: {:,.0f}".format(rf_val_mae))
# To improve accuracy, create a new Random Forest model which you will train on all training data
rf_model_on_full_data = RandomForestRegressor(random_state=1)
# fit rf_model_on_full_data on all data from the training data
rf_model_on_full_data.fit(train_X, train_y)
# In previous code cell
rf_model_on_full_data = RandomForestRegressor()
rf_model_on_full_data.fit(X, y)
# Then in last code cell
test_data_path = 'data/test.csv'
test_data = pd.read_csv(test_data_path)
# create test_X which comes from test_data but includes only the columns you used for prediction
test_X = test_data[features]
test_preds = rf_model_on_full_data.predict(test_X)
output = pd.DataFrame({'Id': test_data.Id,
'SalePrice': test_preds})
output.to_csv('submission.csv', index=False)
output
###Output
/opt/anaconda3/lib/python3.7/site-packages/sklearn/ensemble/forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
|
Interactive_Infer_example.ipynb | ###Markdown
Interactive Inference Example: Text to Speech to TextThis example shows how to set up interactive inference to demo OpenSeq2Seq models. This example will convert text to spoken English via a Text2Speech model and then back to English text via a Speech2Text model.Requirements:* checkpoints for both model* configs for both modelsSteps:1. Put the Text2Speech checkpoint and config inside a new directory 1. For this example, it is assumed to be inside the Infer_T2S subdirectory2. Put the Speech2Text checkpoint and config inside a new directory 1. For this example, it is assumed to be inside the Infer_S2T subdirectory3. Run jupyter notebook and run all cells
###Code
import IPython
import librosa
import numpy as np
import scipy.io.wavfile as wave
import tensorflow as tf
from open_seq2seq.utils.utils import deco_print, get_base_config, check_logdir,\
create_logdir, create_model, get_interactive_infer_results
from open_seq2seq.models.text2speech import save_audio
# Define the command line arguments that one would pass to run.py here
args_S2T = ["--config_file=Infer_S2T/config.py",
"--mode=interactive_infer",
"--logdir=Infer_S2T/",
"--batch_size_per_gpu=1",
]
args_T2S = ["--config_file=Infer_T2S/config.py",
"--mode=interactive_infer",
"--logdir=Infer_T2S/",
"--batch_size_per_gpu=1",
]
# A simpler version of what run.py does. It returns the created model and its saved checkpoint
def get_model(args, scope):
with tf.variable_scope(scope):
args, base_config, base_model, config_module = get_base_config(args)
checkpoint = check_logdir(args, base_config)
model = create_model(args, base_config, config_module, base_model, None)
return model, checkpoint
model_S2T, checkpoint_S2T = get_model(args_S2T, "S2T")
model_T2S, checkpoint_T2S = get_model(args_T2S, "T2S")
# Create the session and load the checkpoints
sess_config = tf.ConfigProto(allow_soft_placement=True)
sess_config.gpu_options.allow_growth = True
sess = tf.InteractiveSession(config=sess_config)
vars_S2T = {}
vars_T2S = {}
for v in tf.get_collection(tf.GraphKeys.VARIABLES):
if "S2T" in v.name:
vars_S2T["/".join(v.op.name.split("/")[1:])] = v
if "T2S" in v.name:
vars_T2S["/".join(v.op.name.split("/")[1:])] = v
saver_S2T = tf.train.Saver(vars_S2T)
saver_T2S = tf.train.Saver(vars_T2S)
saver_S2T.restore(sess, checkpoint_S2T)
saver_T2S.restore(sess, checkpoint_T2S)
# line = "I was trained using Nvidia's Open Sequence to Sequence framework."
# Define the inference function
def infer(line):
print("Input English")
print(line)
# Generate speech
model_in = line.encode("utf-8")
results = get_interactive_infer_results(model_T2S, sess, model_in=model_in)
prediction = results[1][1][0]
audio_length = results[1][4][0]
prediction = prediction[:audio_length-1,:]
prediction = model_T2S.get_data_layer().get_magnitude_spec(prediction)
wav = save_audio(prediction, "unused", "unused", sampling_rate=22050, save_format="np.array", n_fft=1024)
audio = IPython.display.Audio(wav, rate=22050)
wav = librosa.core.resample(wav, 22050, 16000)
print("Generated Audio")
IPython.display.display(audio)
# Recognize speech
model_in = wav
results = get_interactive_infer_results(model_S2T, sess, model_in=model_in)
english_recognized = results[0][0]
print("Recognized Speech")
print(english_recognized)
while True:
line = input()
IPython.display.clear_output()
line = line.decode("utf-8")
infer(line)
###Output
_____no_output_____
###Markdown
Interactive Inference Example: Text to Speech to TextThis example shows how to set up interactive inference to demo OpenSeq2Seq models. This example will convert text to spoken English via a Text2Speech model and then back to English text via a Speech2Text model.Requirements:* checkpoints for both model* configs for both modelsSteps:1. Put the Text2Speech checkpoint and config inside a new directory 1. For this example, it is assumed to be inside the Infer_T2S subdirectory2. Put the Speech2Text checkpoint and config inside a new directory 1. For this example, it is assumed to be inside the Infer_S2T subdirectory3. Run jupyter notebook and run all cells
###Code
import IPython
import librosa
import numpy as np
import scipy.io.wavfile as wave
import tensorflow as tf
from open_seq2seq.utils.utils import deco_print, get_base_config, check_logdir,\
create_logdir, create_model, get_interactive_infer_results
from open_seq2seq.models.text2speech import save_audio
# Define the command line arguments that one would pass to run.py here
args_S2T = ["--config_file=Infer_S2T/config.py",
"--mode=interactive_infer",
"--logdir=Infer_S2T/",
"--batch_size_per_gpu=1",
]
args_T2S = ["--config_file=Infer_T2S/config.py",
"--mode=interactive_infer",
"--logdir=Infer_T2S/",
"--batch_size_per_gpu=1",
]
# A simpler version of what run.py does. It returns the created model and its saved checkpoint
def get_model(args, scope):
with tf.variable_scope(scope):
args, base_config, base_model, config_module = get_base_config(args)
checkpoint = check_logdir(args, base_config)
model = create_model(args, base_config, config_module, base_model, None)
return model, checkpoint
model_S2T, checkpoint_S2T = get_model(args_S2T, "S2T")
model_T2S, checkpoint_T2S = get_model(args_T2S, "T2S")
# Create the session and load the checkpoints
sess_config = tf.ConfigProto(allow_soft_placement=True)
sess_config.gpu_options.allow_growth = True
sess = tf.InteractiveSession(config=sess_config)
vars_S2T = {}
vars_T2S = {}
for v in tf.get_collection(tf.GraphKeys.VARIABLES):
if "S2T" in v.name:
vars_S2T["/".join(v.op.name.split("/")[1:])] = v
if "T2S" in v.name:
vars_T2S["/".join(v.op.name.split("/")[1:])] = v
saver_S2T = tf.train.Saver(vars_S2T)
saver_T2S = tf.train.Saver(vars_T2S)
saver_S2T.restore(sess, checkpoint_S2T)
saver_T2S.restore(sess, checkpoint_T2S)
# line = "I was trained using Nvidia's Open Sequence to Sequence framework."
# Define the inference function
n_fft = model_T2S.get_data_layer().n_fft
sampling_rate = model_T2S.get_data_layer().sampling_rate
def infer(line):
print("Input English")
print(line)
# Generate speech
results = get_interactive_infer_results(model_T2S, sess, model_in=[line])
prediction = results[1][1][0]
audio_length = results[1][4][0]
prediction = prediction[:audio_length-1,:]
prediction = model_T2S.get_data_layer().get_magnitude_spec(prediction, is_mel=True)
wav = save_audio(prediction, "unused", "unused", sampling_rate=sampling_rate, save_format="np.array", n_fft=n_fft)
audio = IPython.display.Audio(wav, rate=sampling_rate)
wav = librosa.core.resample(wav, sampling_rate, 16000)
print("Generated Audio")
IPython.display.display(audio)
if model_T2S.get_data_layer()._both:
mag_prediction = results[1][5][0]
mag_prediction = mag_prediction[:audio_length-1,:]
mag_prediction = model_T2S.get_data_layer().get_magnitude_spec(mag_prediction)
wav = save_audio(mag_prediction, "unused", "unused", sampling_rate=sampling_rate, save_format="np.array", n_fft=n_fft)
audio = IPython.display.Audio(wav, rate=sampling_rate)
wav = librosa.core.resample(wav, sampling_rate, 16000)
print("Generated Audio from magnitude spec")
IPython.display.display(audio)
# Recognize speech
model_in = wav
results = get_interactive_infer_results(model_S2T, sess, model_in=[model_in])
english_recognized = results[0][0]
print("Recognized Speech")
print(english_recognized)
while True:
line = input()
if line == "":
break
IPython.display.clear_output()
infer(line)
###Output
_____no_output_____
###Markdown
Interactive Inference Example: Text to Speech to TextThis example shows how to set up interactive inference to demo OpenSeq2Seq models. This example will convert text to spoken English via a Text2Speech model and then back to English text via a Speech2Text model.Requirements:* checkpoints for both model* configs for both modelsSteps:1. Put the Text2Speech checkpoint and config inside a new directory 1. For this example, it is assumed to be inside the Infer_T2S subdirectory2. Put the Speech2Text checkpoint and config inside a new directory 1. For this example, it is assumed to be inside the Infer_S2T subdirectory3. Run jupyter notebook and run all cells
###Code
import IPython
import librosa
import numpy as np
import scipy.io.wavfile as wave
import tensorflow as tf
from open_seq2seq.utils.utils import deco_print, get_base_config, check_logdir,\
create_logdir, create_model, get_interactive_infer_results
from open_seq2seq.models.text2speech import save_audio
# Define the command line arguments that one would pass to run.py here
args_S2T = ["--config_file=Infer_S2T/config.py",
"--mode=interactive_infer",
"--logdir=Infer_S2T/",
"--batch_size_per_gpu=1",
]
args_T2S = ["--config_file=Infer_T2S/config.py",
"--mode=interactive_infer",
"--logdir=Infer_T2S/",
"--batch_size_per_gpu=1",
]
# A simpler version of what run.py does. It returns the created model and its saved checkpoint
def get_model(args, scope):
with tf.variable_scope(scope):
args, base_config, base_model, config_module = get_base_config(args)
checkpoint = check_logdir(args, base_config)
model = create_model(args, base_config, config_module, base_model, None)
return model, checkpoint
model_S2T, checkpoint_S2T = get_model(args_S2T, "S2T")
model_T2S, checkpoint_T2S = get_model(args_T2S, "T2S")
# Create the session and load the checkpoints
sess_config = tf.ConfigProto(allow_soft_placement=True)
sess_config.gpu_options.allow_growth = True
sess = tf.InteractiveSession(config=sess_config)
vars_S2T = {}
vars_T2S = {}
for v in tf.get_collection(tf.GraphKeys.VARIABLES):
if "S2T" in v.name:
vars_S2T["/".join(v.op.name.split("/")[1:])] = v
if "T2S" in v.name:
vars_T2S["/".join(v.op.name.split("/")[1:])] = v
saver_S2T = tf.train.Saver(vars_S2T)
saver_T2S = tf.train.Saver(vars_T2S)
saver_S2T.restore(sess, checkpoint_S2T)
saver_T2S.restore(sess, checkpoint_T2S)
# line = "I was trained using Nvidia's Open Sequence to Sequence framework."
# Define the inference function
def infer(line):
print("Input English")
print(line)
# Generate speech
model_in = line.encode("utf-8")
results = get_interactive_infer_results(model_T2S, sess, model_in=model_in)
prediction = results[1][1][0]
audio_length = results[1][4][0]
prediction = prediction[:audio_length-1,:]
prediction = model_T2S.get_data_layer().get_magnitude_spec(prediction)
wav = save_audio(prediction, "unused", "unused", save_format="np.array")
audio = IPython.display.Audio(wav, rate=22050)
wav = librosa.core.resample(wav, 22050, 16000)
print("Generated Audio")
IPython.display.display(audio)
# Recognize speech
model_in = wav
results = get_interactive_infer_results(model_S2T, sess, model_in=model_in)
english_recognized = results[0][0]
print("Recognized Speech")
print(english_recognized)
while True:
line = input()
IPython.display.clear_output()
line = line.decode("utf-8")
infer(line)
###Output
Input English
Anyone can edit this and generate speech!
Generated Audio
|
Lab03.ipynb | ###Markdown
1. Linear Regression The following example describes the expenditure (in dollars) on recreation per month by employees at a certain company, and their corresponding monthly incomes. Calculate linear regression for the given data and plot the results.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20.0, 10.0)
%cd "/home/mona/3074 ML Lab/Datasets"
# read data
data = pd.read_csv('expenditure.csv')
# collecting X and Y
X = data['Expenditure($)'].values
Y = data['Income($)'].values
# mean X and Y
mean_x = np.mean(X)
mean_y = np.mean(Y)
# total number of values
n = len(X)
# using formula to calculate m and c
numerator = 0
denominator = 0
for i in range(n):
numerator += (X[i] - mean_x) * (Y[i] - mean_y)
denominator += (X[i] - mean_x) ** 2
b1 = numerator/denominator # 'm' value
b0 = mean_y - (b1*mean_x) # 'c' value
#print("b0:" + b0 + "\nb1: " + b1)
print(b0)
print(b1)
# plotting values and Regression Line
max_x = np.max(X) + 100
min_x = np.min(X) - 100
# calculating line values x and y
x = np.linspace(min_x, max_x, 1000)
y = b0 + b1 * x
# plotting line
plt.plot(x, y, color='#58b970', label='Regression Line')
# plotting scatter points
plt.scatter(X, Y, c='#ef5423', label = 'Scatter Plot')
plt.xlabel('Expenditure in $')
plt.ylabel('Income in $')
plt.legend()
plt.show()
sst = 0
ssr = 0
for i in range(n):
y_pred = b0 + b1 * X[i]
sst += (Y[i] - mean_y) ** 2
ssr += (Y[i] - y_pred) ** 2
r2 = 1 - (ssr/sst)
#print("R^2: " + r2)
print(r2)
###Output
0.7063015073349272
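###Markdown
For reference, the slope, intercept and R² computed in the cells above follow the standard least-squares formulas: $$b_1 = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^{2}}, \qquad b_0 = \bar{y} - b_1\bar{x}, \qquad R^{2} = 1 - \frac{SSR}{SST} = 1 - \frac{\sum_{i}(y_i-\hat{y}_i)^{2}}{\sum_{i}(y_i-\bar{y})^{2}}$$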
###Markdown
2. Find-S Algorithm Implement the Find-S algorithm for the following data
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20.0, 10.0)
%cd "/home/mona/3074 ML Lab/Datasets"
# read data
data = pd.read_csv('mushrooms.csv')
# collecting X and Y
concepts = np.array(data)[:,:-1]
target = np.array(data)[:,-1]
def finds_train(con, tar):
for i, val in enumerate(tar):
if val == 'yes':
specific_h = con[i].copy()
break
for i, val in enumerate(con):
if tar[i] == 'yes':
for x in range(len(specific_h)):
if val[x] != specific_h[x]:
specific_h[x] = '?'
else:
pass
return specific_h
#print("Final specific hypothesis: " + finds_train(concepts, target))
print(finds_train(concepts, target))
###Output
['?' '?' 'hard' '?' '?']
###Markdown
3. Candidate Elimination Algorithm Test your algorithm with the following data.
###Code
import random
import csv
import numpy as np
import pandas as pd
%cd "/home/mona/3074 ML Lab/Datasets"
# read data
data = pd.read_csv('cars.csv')
# collecting X and Y
concepts = np.array(data)[:,:-1]
target = np.array(data)[:,-1]
specific_h = concepts[0].copy()
general_h = []
n = len(data.columns) - 1
for i in range(n):
general_h.append(['?'] * n)
print(n)
print(general_h)
def can_elim(con, tar):
for i, val in enumerate(con):
if tar[i] == 'Yes':
for x in range(len(specific_h)):
if val[x] != specific_h[x]:
specific_h[x] = '?'
else:
for x in range(len(specific_h)):
if val[x] != specific_h[x]:
general_h[x][x] = specific_h[x]
print(specific_h)
print("\nFinal general hypothesis:")
print(general_h)
return None
print("Final specific hypothesis: ")
can_elim(concepts, target)
###Output
Final specific hypothesis:
['Japan' '?' '?' '?' 'Economy']
Final general hypothesis:
[['Japan', '?', '?', '?', '?'], ['?', '?', '?', '?', '?'], ['?', '?', 'Blue', '?', '?'], ['?', '?', '?', '?', '?'], ['?', '?', '?', '?', 'Economy']]
###Markdown
Spot Polynomial Regression Assume that there is only one independent variable x. If the relationship between the independent variable x and the dependent (output) variable y is modeled by the relation: y = a + a1*x + a2*x^2 + ... + an*x^n for some positive integer n > 1, then we have a polynomial regression. Plot your results for the equation y = a + a1*x + a2*x^2 with x = 3, 4, 5, 6, 7. The values of a, a1 and a2 can be assumed
###Code
# assuming a, a1, a2
a = 5
a1 = 3
a2 = 8
x = np.array([3, 4, 5, 6, 7])
y = a + a1*x + a2*np.multiply(x, x)
y
# plotting values and Regression Line
max_x = np.max(x) + 10
min_x = np.min(x) - 10
print(max_x, min_x)
# plotting line
x = np.linspace(min_x, max_x)
y = a + a1*x + a2*np.multiply(x, x)
plt.plot(x, y, color='#58b970', label='Polynomial Regression Line')
# plotting scatter points
plt.scatter(x, y, c='#ef5423', label = 'Scatter Plot')
plt.xlabel('X values')
plt.ylabel('Y values')
plt.legend()
plt.show()
np.interp(2, x, y)
# plotting values and Regression Line
max_x = np.max(x) + 2
min_x = np.min(x) - 2
print(max_x, min_x)
# plotting line
x = np.linspace(min_x, max_x, 10)
y = a + a1*x + a2*np.multiply(x, x)
plt.plot(x, y, color='#58b970', label='Polynomial Regression Line')
# plotting scatter points
plt.scatter(x, y, c='#ef5423', label = 'Scatter Plot')
#plotting new predicted value
plt.scatter(9, np.interp(9, x, y), c='blue', s=100)
plt.xlabel('X values')
plt.ylabel('Y values')
plt.legend()
plt.show()
###Output
19.0 -9.0
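###Markdown
The cells above plot a polynomial whose coefficients a, a1 and a2 are already assumed. As a complementary sketch (the sample points and noise level below are arbitrary choices), the coefficients of a degree-2 polynomial can instead be estimated from data with `np.polyfit`, which performs a least-squares polynomial fit:
###Code
# generate noisy samples of y = a + a1*x + a2*x^2 and recover the coefficients from them
np.random.seed(0)
x_s = np.linspace(3, 7, 20)
y_s = a + a1*x_s + a2*x_s**2 + np.random.normal(scale=5.0, size=x_s.size)
coeffs = np.polyfit(x_s, y_s, 2)   # returned highest degree first: [a2, a1, a]
print(coeffs)
print(np.polyval(coeffs, 9))       # predicted y value at x = 9
###Output
_____no_output_____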
###Markdown
Computer Science Intensive Course - MindX LAB 3. DATA STRUCTURES IN PYTHON
###Code
# run this cell FIRST
%run test_cases_3.ipynb
###Output
_____no_output_____
###Markdown
Problem 1. Filtering Words Given a paragraph consisting of many words and punctuation marks, with the words separated by at least one space, extract the words that appear in the paragraph and sort them in alphabetical order (case-insensitive). **Input**: A paragraph as a *str* of length < 10^6 characters. **Output**: A *list* containing the words that appear in the paragraph, in alphabetical order. Return an empty *list* if there are no words. **Example**:- Input: "The cat is chasing the rat. The dog is also chasing the rat."- Output: ['cat', 'chasing', 'is', 'rat', 'the']- Explanation: The output contains the words that appear in the paragraph, sorted alphabetically. Words that appear multiple times, such as 'is' and 'the', are listed only once. **Hint**: Filter out the special characters
###Code
string = "The cat is chasing the rat."
''.join(c for c in string if c.isalpha())
def filter_word(inp_str):
pass
# !!! DO NOT MODIFY THIS CELL
# Check result on test cases
test1(filter_word)
###Output
Testing on 9 cases.
- Test 1 PASSED.
- Test 2 PASSED.
- Test 3 PASSED.
- Test 4 PASSED.
- Test 5 PASSED.
- Test 6 PASSED.
- Test 7 PASSED.
- Test 8 PASSED.
- Test 9 PASSED.
CONGRATULATIONS! All test cases passed!
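###Markdown
One possible implementation sketch for `filter_word`, under the reading that every distinct word is kept once (lower-cased, non-alphabetic characters stripped as in the hint above) and the result is sorted alphabetically:
###Code
def filter_word(inp_str):
    words = set()
    for token in inp_str.lower().split():
        # keep only the alphabetic characters of each token
        word = ''.join(c for c in token if c.isalpha())
        if word:
            words.add(word)
    return sorted(words)
###Output
_____no_output_____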
###Markdown
Problem 2. Wrapping Text Given a paragraph consisting of many words and punctuation marks, with the words separated by at least one space. The paragraph is displayed on a screen that is *k* characters wide. Split the paragraph into smaller strings such that:- Each string is as long as possible- No string is longer than k characters.- No word is cut in the middle (e.g. "MindX Technology School." must not be split into "MindX Tech" and "nology School.")- There is no space at the beginning or end of a resulting string (e.g. "MindX Technology School." may be split into "MindX Technology" and "School.")**Input**: A paragraph as a *str* of length < 10^6 characters and an integer *0 < k < 50*. No word is longer than k. **Output**: Return a list of the resulting strings. **Example**: - Input: "The cat is chasing the rat. The dog is also chasing the rat.", k=10- Output: ['The cat is', 'chasing', 'the rat.', 'The dog is', 'also', 'chasing', 'the rat.']
###Code
def wrap_text(inp_str, k):
pass
# !!! DO NOT MODIFY THIS CELL
# Check result on test cases
test2(wrap_text)
###Output
Testing on 9 cases.
- Test 1 PASSED.
- Test 2 PASSED.
- Test 3 PASSED.
- Test 4 PASSED.
- Test 5 PASSED.
- Test 6 PASSED.
- Test 7 PASSED.
- Test 8 PASSED.
- Test 9 PASSED.
CONGRATULATIONS! All test cases passed!
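###Markdown
A possible greedy implementation sketch for `wrap_text`: keep appending words to the current line while the line stays within k characters, then start a new line.
###Code
def wrap_text(inp_str, k):
    lines, current = [], ""
    for word in inp_str.split():
        if not current:
            current = word                       # first word of a new line
        elif len(current) + 1 + len(word) <= k:
            current += " " + word                # word still fits on this line
        else:
            lines.append(current)                # line is full, start a new one
            current = word
    if current:
        lines.append(current)
    return lines
# wrap_text("The cat is chasing the rat. The dog is also chasing the rat.", 10)
# -> ['The cat is', 'chasing', 'the rat.', 'The dog is', 'also', 'chasing', 'the rat.']
###Output
_____no_output_____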
###Markdown
Problem 3. The Anomalous Element Given two *lists* that contain the same elements but in different orders. The elements within a *list* are not duplicated. However, due to an incident, one of the two lists has one extra element that does not match any other element. Find the value of that element. **Input**: Two lists of integers of lengths *n* and *n+1* (or *n+1* and *n*) with *0 < n < 10^6*. **Output**: An integer, the value of the anomalous element. **Example**:- Input: [1, 4, 5, 7, 9], [7, 4, 1, 9]- Output: 5
###Code
def find_anomaly(list_1, list_2):
pass
# !!! DO NOT MODIFY THIS CELL
# Check result on test cases
test3(find_anomaly)
###Output
Testing on 6 cases.
- Test 1 PASSED.
- Test 2 PASSED.
- Test 3 PASSED.
- Test 4 PASSED.
- Test 5 PASSED.
- Test 6 PASSED.
CONGRATULATIONS! All test cases passed!
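###Markdown
A possible implementation sketch for `find_anomaly`: since the elements within each list are distinct, the extra element is exactly the symmetric difference of the two sets.
###Code
def find_anomaly(list_1, list_2):
    # the single value that appears in only one of the two lists
    return (set(list_1) ^ set(list_2)).pop()
# find_anomaly([1, 4, 5, 7, 9], [7, 4, 1, 9]) -> 5
###Output
_____no_output_____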
###Markdown
Problem 4. Diagonal Sums Given a matrix with *n* rows and *n* columns, compute the sums of the values on the two diagonals of the matrix. **Input**: An *n x n* matrix as a *nested list*, with *0 < n < 1000*. Each element lies in the range [-10^6, 10^6] **Output**: A *tuple* of two values: the sum of the values on the main diagonal and on the anti-diagonal, in that order. **Example**:- Input: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]- Output: (15, 15)- Explanation: The sum on the main diagonal is 1+5+9 = 15; on the anti-diagonal it is 3+5+7 = 15**Hint**:In Python, a matrix can be stored as a *nested list*, i.e. the elements of the *list* are other *lists*, each of the same length. How to index it:
###Code
matrix = [[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
print("Second row: {}".format(matrix[1]))
print("Element at row 0, column 1: {}".format(matrix[0][1]))
def sum_diagonals(matrix):
pass
# !!! DO NOT MODIFY THIS CELL
# Check result on test cases
test4(sum_diagonals)
###Output
Testing on 7 cases.
- Test 1 PASSED.
- Test 2 PASSED.
- Test 3 PASSED.
- Test 4 PASSED.
- Test 5 PASSED.
- Test 6 PASSED.
- Test 7 PASSED.
CONGRATULATIONS! All test cases passed!
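###Markdown
A possible implementation sketch for `sum_diagonals`, indexing the main diagonal as matrix[i][i] and the anti-diagonal as matrix[i][n-1-i]:
###Code
def sum_diagonals(matrix):
    n = len(matrix)
    main_diag = sum(matrix[i][i] for i in range(n))
    anti_diag = sum(matrix[i][n - 1 - i] for i in range(n))
    return (main_diag, anti_diag)
# sum_diagonals([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) -> (15, 15)
###Output
_____no_output_____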
###Markdown
Problems and Questions:1) What were the most popular girl’s and boy’s names in the last decade (2009-2018)? Can you notice any trends in names?2) How many girls and how many boys were born between 1990 and 2000?3) Can you observe the trends in the number of births over all years? Are these trends different between girls and boys? Provide relevant charts.4) Comment on statistics for your name. Give as many insights as possible.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style = 'darkgrid')
###Output
_____no_output_____
###Markdown
Creating a function to merge all the .txt files into one pandas data frame
###Code
# read the content of the first file
column_heads = ['name', 'gender', 'counts']
ba80 =pd.read_csv('./data/names/yob1880.txt',encoding = "ISO-8859-1", header = None, names = column_heads)
ba80['yob'] = 1880
ba80.head()
#function to read and combine all txt files. It returns one data frame and the total number of rows read.
def read_all_files(baby):
column_heads = ['name', 'gender', 'counts']
total_rows = len(baby.index)
for year in range(1881,2019):
path = './/data//names//yob' + str(year) + '.txt'
bab = pd.read_csv(path,encoding = "ISO-8859-1", header = None, names = column_heads)
bab['yob'] = year
total_rows += len(bab.index)
baby = baby.append(bab,ignore_index=False)
return baby,total_rows
#check if total number of rows from each files equals to the total number of rows in a panda data frame
baby_dt, rows = read_all_files(ba80)
print(rows)
print(len(baby_dt.index))
rows == len(baby_dt.index)
###Output
_____no_output_____
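###Markdown
A side note: `DataFrame.append` used above is deprecated in recent pandas releases (and removed in pandas 2.0). A minimal alternative sketch of the same merge built on `pd.concat` (`read_all_files_concat` is just an illustrative name):
###Code
def read_all_files_concat():
    column_heads = ['name', 'gender', 'counts']
    frames = []
    for year in range(1880, 2019):
        path = './/data//names//yob' + str(year) + '.txt'
        bab = pd.read_csv(path, encoding="ISO-8859-1", header=None, names=column_heads)
        bab['yob'] = year
        frames.append(bab)
    # concatenate all yearly frames in one step instead of appending repeatedly
    return pd.concat(frames, ignore_index=True)
###Output
_____no_output_____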
###Markdown
1. What were the most popular girl’s and boy’s names in the last decade (2009-2018)? Can you notice any trends in names?
###Code
baby_dt.head()
baby_dt.isnull().any()
baby_dt.describe()
###Output
_____no_output_____
###Markdown
All names sorted in descending order for the decade (2009-2018)
###Code
all_names = baby_dt.query("2009<=yob<=2018").groupby(['name','gender']).sum().sort_values(by='counts',ascending = False)
all_names.reset_index(inplace=True)
all_names = all_names.drop('yob', axis=1)
all_names.head()
###Output
_____no_output_____
###Markdown
a) overall most popular 10 names for boys for the entire decade
###Code
all_names.query("gender == 'M'")[:10]
###Output
_____no_output_____
###Markdown
b) Overall most popular 10 names for girls for the entire decade
###Code
all_names.query("gender == 'F'")[:10]
###Output
_____no_output_____
###Markdown
c) Analysing the trends in names: I selected the top 1500 popular names in the decade, checked whether there is any relationship between the last letter and the number of births, and checked the trend in how the same names were used by both genders.
###Code
#extracting top 1500 unique popular names for the last decade
last_decade = baby_dt.query("2009<=yob<=2018").groupby(['yob','name','gender']).sum().sort_values(by='counts',ascending = False)
last_decade.reset_index(inplace=True)
last_decade.head()
top1500 = last_decade[:11730]
len(top1500['name'].unique())
#adding a last letter column to top1500
top1500['last_letter'] = [x.strip()[-1] for x in top1500['name']]
top1500.head()
letters = top1500.pivot_table('counts',columns = ['gender','yob'], aggfunc = sum, index = 'last_letter').fillna(0)
letters.head()
#normalizing counts
letter=letters/letters.sum()
fig,ax = plt.subplots(2,1,figsize = (30,20), sharey=True)
letter['F'].plot(kind='bar', rot=0, ax=ax[0], title='Female',width = 0.9)
letter['M'].plot(kind='bar', rot=0, ax=ax[1], title='Male',width = 0.9)
###Output
_____no_output_____
###Markdown
The plots above indicate that names ending with the letter 'a' were the most popular for females, while names ending with the letter 'n' were the most popular for males. Let's observe how the popularity of the top 3 letters varied per year over the last decade
###Code
#trend for most popular boys name ending with letters n, r and s over the past decade
nrsb = letter.loc[['n','r','s'],'M'].T
nrsb.head()
nrsb.plot(title = 'Trending for boys letters n,r and s')
###Output
_____no_output_____
###Markdown
We notice that names ending with the letter 'n' were losing popularity from 2011, while letters 'r' and 's' were maintaining their popularity.
###Code
#trend for most popular girls names ending with letters a, e and n over the past decade
nrsg = letter.loc[['a','e','n'],'F'].T
nrsg.head()
nrsg.plot(title = 'Trending for girls letters a,e and n')
###Output
_____no_output_____
###Markdown
We notice that from 2016 parents increasingly used the letter 'a' as the last letter of their babies' names, while letters 'e' and 'n' were losing popularity.
###Code
#which last letter constructed many different names among the top 1500 names in a decade?
uniq_last_letter = top1500['last_letter'].unique()
un = np.array([len(top1500[top1500['last_letter'] == letter]['name'].unique()) for letter in uniq_last_letter])
dfn = pd.DataFrame({'letter':uniq_last_letter, 'Total names':un}).sort_values(by='Total names',ascending = False)
dfn.head()
###Output
_____no_output_____
###Markdown
Let's check on how the same name was used by both genders for the last decade.
###Code
females = top1500.query("gender=='F'")[['name','yob']]
females.head()
males = top1500.query("gender == 'M'")[['name','yob']]
males.head()
name_change = males.merge(females, on='name',suffixes=('_to_girl', '_to_boy'))
name_change.head()
ax=name_change[['name','yob_to_girl']].groupby('yob_to_girl').count().plot(label = 'changed to be a girl name')
name_change[['name','yob_to_boy']].groupby('yob_to_boy').count().plot(ax=ax, c = 'black', xlabel='year', ylabel='name count')
L=plt.legend()
L.get_texts()[0].set_text('Used as a girl name')
L.get_texts()[1].set_text('Used as a boy name')
###Output
_____no_output_____
###Markdown
General conclusions for question 1: 1. The most popular boy's name overall for the entire decade was NOAH. 2. The most popular girl's name overall for the entire decade was EMMA. 3. Girls' names ending with the letter 'a' were the most preferred by parents. 4. Boys' names ending with the letter 'n' were the most liked by parents. 5. From 2016, parents increasingly named their baby girls with names ending in the letter 'a'. 6. Parents started to reduce the use of the letter 'n' as the last letter for their male babies from around 2011. 7. The letters 'n' and 'a' formed the largest numbers of distinct names, respectively. 8. In the first half of the decade many boys' names were also used as girls' names. 9. In the second half of the decade many girls' names were also used as boys' names.
###Code
baby_dt.query("1990<yob<2000 & gender == 'F'")['counts'].sum()/baby_dt.query("1990<yob<2000")['counts'].sum()
###Output
_____no_output_____
###Markdown
16,103,318 girls were born between 1990 and 2000; the cell above shows their share of all births in that period, about 48%.
###Code
baby_dt.query("1990<yob<2000 & gender == 'M'")['counts'].sum()/baby_dt.query("1990<yob<2000")['counts'].sum()
###Output
_____no_output_____
###Markdown
17,421,143 boys were born between 1990 and 2000; the cell above shows their share of all births in that period, about 52%.

3) Can you observe the trends in the number of births over all years? Are these trends different between girls and boys? Provide relevant charts.
###Code
births_year = baby_dt.pivot_table('counts', columns=['gender'], aggfunc=sum, index = 'yob')
births_year.head()
births_year['Total Births'] = births_year['F'] + births_year['M']
births_year.head()
births_year.plot(figsize=(15,7), title='Total births in a year', xlabel = 'year', ylabel = 'births')
###Output
_____no_output_____
###Markdown
General conclusions for question 3:
1. Before 1940, female births outweighed male births.
2. After 1940, male births outweighed female births.
3. Before 1910 there were few births.
4. Between 1940 and 1980 we observe an increased number of births, possibly because of increased civilization and industrialization. The rate then decreases towards around 2010, possibly due to the availability of contraceptive pills, the development of science and technology, etc.

4) Comment on statistics for your name. Give as many insights as possible.
###Code
#my name, Godfrey
godfrey = baby_dt.query("name == 'Godfrey'")
godfrey.head()
godfrey['counts'].describe()
godfrey.query("counts == 71")
fig = plt.figure(figsize = (15,6))
ax=sns.lineplot(data=godfrey,x='yob',y='counts',lw=1)
plt.axvline(1918, c = 'gray', lw = 2, label = 'Most counts')
plt.title('Godfrey number of counts per year')
_=plt.xticks(rotation ='vertical')
plt.legend()
godfrey['counts'].sum()
(godfrey['counts'].sum()/baby_dt['counts'].sum()) * 1000000
###Output
_____no_output_____
###Markdown
💻 IDS507 | Lab03
Regression Analysis
TA: 류회성 (Hoe Sung Ryu)

Concepts | What we will learn today
---
- Splitting my data into train/test datasets
- Building a `Logistic regression` model with my data
- Using the trained model to predict new data
- How well does my model perform?
- Computing the Confusion Matrix, ROC Curve & AUC

📌 1. Mounting Google Drive
###Code
from google.colab import drive # mount Google Drive
drive.mount('/content/gdrive')
import os
os.chdir('/content/gdrive/My Drive/IDS507-00/2022_IDS507_Lab') # set the data path
!pwd
###Output
/content/gdrive/My Drive/IDS507-00/2022_IDS507_Lab
###Markdown
📌 2. Regression Analysis

1) Regression
- Data values tend to return to an existing tendency such as the mean
- A technique that identifies the correlations among several variables and explains/predicts the value of one particular variable from the values of the others
- Independent variables and a dependent variable

2) Types of regression analysis
- Distinguished by the number of variables and the form of the coefficients
- By the number of independent variables
  - Simple: one independent variable
  - Multiple: several independent variables
- By the form of the regression coefficients
  - Linear: the coefficients can be expressed as a linear combination
  - Nonlinear: the coefficients cannot be expressed as a linear combination
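For instance, the only mechanical difference between simple and multiple linear regression in scikit-learn is the number of feature columns passed to the model. A minimal sketch on made-up data (not part of the original lab):

```python
# Minimal sketch (made-up data, not from the lab): simple vs. multiple linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, size=(50, 1))  # one independent variable -> simple regression
x2 = rng.uniform(0, 5, size=(50, 1))   # a second independent variable -> multiple regression
y = 3.0 * x1[:, 0] + 1.5 * x2[:, 0] + rng.normal(0, 0.5, 50)

simple = LinearRegression().fit(x1, y)                     # y ~ a*x1 + b
multiple = LinearRegression().fit(np.hstack([x1, x2]), y)  # y ~ a1*x1 + a2*x2 + b
print(simple.coef_, simple.intercept_)      # one coefficient
print(multiple.coef_, multiple.intercept_)  # one coefficient per feature
```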
###Code
# sample data
# 1. train
X_train = [[1],[2],[3],[4],[5]] # even if there is only one feature, each value must be a list or an array
y_train = [2.3, 3.99, 5.15, 7.89, 8.6]
# 2. test
X_test = [[6],[7]]
y_test = [10.1, 11.9]
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
reg = lr.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
3) Simple linear regression
- When there is one independent variable and one dependent variable, a regression method that captures their relationship **linearly**
- The **`relationship`** between the `independent variable X` and the `dependent variable Y` is expressed as a **`first-order equation of the form Y = aX + b`**

Regression coefficient → y = **`a`**x + b
- The degree of influence of the independent variable on the dependent variable, i.e. the slope of the line

Intercept → y = ax + **`b`**
- The constant value when the independent variable is 0

Residual → y = ax + b + **`Error`**
- The error given by the difference between the actual values and the regression equation
- The smaller the residuals, the better the fitted regression equation explains the data
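As an illustration (a sketch reusing the same toy training data as above, not part of the original lab), the residuals are simply the gap between each observed y and the fitted line a·x + b:

```python
# Sketch: computing the residuals y - (a*x + b) of a fitted line by hand.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [4], [5]])
y = np.array([2.3, 3.99, 5.15, 7.89, 8.6])
fitted = LinearRegression().fit(X, y)

a, b = fitted.coef_[0], fitted.intercept_
residuals = y - (a * X[:, 0] + b)   # observed value minus the value on the line
print(residuals)
print(round(residuals.sum(), 10))   # ~0: least squares balances the residuals around the line
```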
###Code
y_pred = reg.predict(X_test)
y_pred
# check the regression coefficient and intercept of the fitted model
# the coefficient is stored in the coef_ attribute, the intercept in the intercept_ attribute
print("Regression coefficient : ", reg.coef_)
print("Intercept : ", reg.intercept_)
print(f'Linear equation: y = {reg.coef_[0]}X + {reg.intercept_:.4f}')
###Output
Regression coefficient : [1.65]
Intercept : 0.636000000000001
Linear equation: y = 1.65X + 0.6360
###Markdown
4) Checking performance metrics with scikit-learn
- Evaluation metrics for regression analysis

|Metric|Meaning|Function|
|---|---|---|
|MAE|Mean Absolute Error: the mean of the absolute differences between the actual and predicted values|`mean_absolute_error` in the metrics module|
|MSE|Mean Squared Error: the mean of the squared differences between the actual and predicted values|`mean_squared_error` in the metrics module|
|RMSE|Root of the MSE, i.e. the square root of the MSE|`sqrt` in the math or numpy module|
|$R^2$|Coefficient of determination: the ratio of the variance of the predictions to the variance of the actual values|`r2_score` in the metrics module, or `LinearRegression`'s `score`|
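To make the table concrete, here is a small sketch (toy numbers, not from the lab) that computes each metric by hand with NumPy and checks it against the corresponding scikit-learn function:

```python
# Sketch: the metrics in the table computed by hand and cross-checked with sklearn.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([10.1, 11.9, 9.5])
y_hat = np.array([10.5, 12.2, 9.0])
err = y_true - y_hat

mae = np.mean(np.abs(err))                                            # MAE
mse = np.mean(err ** 2)                                               # MSE
rmse = np.sqrt(mse)                                                   # RMSE
r2 = 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)     # R^2

print(mae, mean_absolute_error(y_true, y_hat))
print(mse, mean_squared_error(y_true, y_hat))
print(rmse)
print(r2, r2_score(y_true, y_hat))
```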
###Code
# evaluate the results
from sklearn.metrics import (mean_squared_error,
r2_score,
mean_absolute_error,
)
print(mean_squared_error(y_test, y_pred))
print(r2_score(y_test, y_pred))
print(mean_absolute_error(y_test, y_pred))
print(mean_squared_error(y_test, y_pred)**(1/2)) # RMSE is the square root of the MSE
# plot the analysis results
import matplotlib.pyplot as plt
x = range(1,8)
plt.title("Linear Regression")
plt.plot(X_train+X_test,y_train+y_test,'o',color = 'blue')
plt.plot(x,reg.coef_*x+reg.intercept_,'--',color='red')
plt.plot(X_test,y_pred,'x',color = 'black')
plt.show()
###Output
_____no_output_____
###Markdown
📌 3. Logistic regression analysis with real data

1) What is logistic regression?
- A technique that applies the linear regression model to **`classification`**
- It estimates the probability that a data point belongs to a particular label (class)
  - What is the probability that this e-mail is spam?
  - What is the probability of passing this exam?
- Unlike other linear regression models, the dependent variable is categorical rather than numerical
  - spam mail / normal mail
  - pass / fail
- If the estimated probability for a particular class is 50% or higher, the data point is classified as belonging to that class
- Basic logistic regression is binomial: the dependent variable takes only the two values 0 and 1
  - In this case the dependent variable is the class itself
  - A value of 0 is called negative, a value of 1 positive
- To perform a linear regression that gives correct results for such binary data, the function must
  - be a continuous, monotone increasing function
  - return values in the interval [0, 1]
- A function that satisfies these properties is the sigmoid function

$$ y = \frac{1}{1+e^{-x}} $$

2) Performance metrics for classification

|Function|Description|
|---|---|
|**accuracy_score**|Computes the accuracy.|
|**confusion_matrix**|Produces the confusion matrix.|
|**precision_score**|Computes the precision.|
|**recall_score**|Computes the recall.|
|**f1_score**|Computes the F1 score.|
|**classification_report**|Shows precision, recall, and the F1 score together.|
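The link between the sigmoid and the classifier used later can be checked directly: for a binary `LogisticRegression`, `predict_proba` is the sigmoid applied to the linear score. A small sketch on toy data (not part of the lab):

```python
# Sketch (toy data): predict_proba of a binary LogisticRegression is the sigmoid
# of its linear score w*x + b (the decision_function).
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(X, y)

score = clf.decision_function(X)            # w*x + b
prob_manual = 1 / (1 + np.exp(-score))      # sigmoid of the score
prob_sklearn = clf.predict_proba(X)[:, 1]   # probability of class 1
print(np.allclose(prob_manual, prob_sklearn))  # True
print((prob_sklearn >= 0.5).astype(int))       # thresholding at 0.5 gives the predicted class
```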
###Code
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
X = np.arange(-10,10,0.1)
y = 1 / (1+np.exp(-X))
plt.plot(X,y,label = 'Sigmoid')
plt.plot(X,[0.5 for _ in X],color='red',label = 'Threshold')
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
3) Loading the diabetes data

reference: https://www.kaggle.com/saurabh00007/diabetescsv

* Pregnancies: number of pregnancies
* Glucose: plasma glucose level from a glucose tolerance test
* BloodPressure: blood pressure (mm Hg)
* SkinThickness: triceps skinfold thickness (mm)
* Insulin: serum insulin (mu U/ml)
* BMI: body mass index (weight (kg) / (height (m))^2)
* DiabetesPedigreeFunction: diabetes pedigree (family history) weight
* Age: age
* Outcome: class label (0 or 1)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
from sklearn.metrics import f1_score, confusion_matrix, precision_recall_curve, roc_curve
from sklearn.preprocessing import StandardScaler,MinMaxScaler
from sklearn.linear_model import LogisticRegression
diabetes_data = pd.read_csv('./data/diabetes.csv') # load the data
diabetes_data.head(3)
print(diabetes_data['Outcome'].value_counts())
# distribution of the 'Glucose' feature
plt.hist(diabetes_data['Glucose'], bins=10)
###Output
_____no_output_____
###Markdown
4) Splitting the Train / Test sets with the scikit-learn package

Parameter description — the arguments of `train_test_split(arrays, test_size, train_size, random_state, shuffle, stratify)`:
```
arrays       : the data to split (Python list, NumPy array, pandas DataFrame, ...)
test_size    : fraction (float) or count (int) of the test dataset (default = 0.25)
train_size   : fraction (float) or count (int) of the training dataset (default = the remainder of test_size)
random_state : seed that fixes the shuffling performed when splitting (an int or a RandomState)
shuffle      : whether to shuffle (default = True)
stratify     : keeps the proportions of the given data. For example, if the label set Y is a binary set of 25% zeros and 75% ones, setting stratify=Y splits the data so that both resulting sets keep the 25%/75% ratio of 0s and 1s.
```
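The effect of `stratify` is easy to see on synthetic labels; a minimal sketch (not part of the lab), with a 25%/75% class split that is preserved in both partitions:

```python
# Sketch (synthetic labels): stratify keeps the class ratio of y in both splits.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 25 + [1] * 75)   # 25% zeros, 75% ones

_, _, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
print(np.bincount(y_tr) / len(y_tr))   # [0.25 0.75]
print(np.bincount(y_te) / len(y_te))   # [0.25 0.75]
```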
###Code
# 피처 데이터 세트 X, 레이블 데이터 세트 y를 추출.
X = diabetes_data.iloc[:, :-1]
y = diabetes_data.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 156, stratify=y)
# 로지스틱 회귀로 학습,예측 및 평가 수행.
lr_clf = LogisticRegression(max_iter=1000,)
lr_clf.fit(X_train , y_train)
y_pred = lr_clf.predict(X_test)
y_pred
accuracy = accuracy_score(y_test , y_pred)
print("Accuracy : ", round(accuracy, 3))
print(500/diabetes_data['Outcome'].value_counts().sum()) # baseline: share of the majority class (Outcome == 0)
###Output
0.6510416666666666
###Markdown
5) Confusion Matrix
###Code
# calculate AUC of the model (predict_proba is needed here and for the ROC curve below)
pred_proba = lr_clf.predict_proba(X_test)
roc_auc = roc_auc_score(y_test, pred_proba[:,1])
confusion = confusion_matrix( y_test, y_pred)
print('AUC score:', roc_auc)
print('오차 행렬')
print(confusion)
import pandas as pd
import seaborn as sns
matrix = pd.DataFrame(confusion,
                      columns = ['Predicted 0', 'Predicted 1'],
                      index = ['Actual 0', 'Actual 1']
                     )
sns.heatmap(matrix, annot=True, cmap='Blues', fmt='d')
from sklearn.metrics import roc_curve
# roc curve for models
fpr1, tpr1, thresh1 = roc_curve(y_test, pred_proba[:,1], pos_label=1)
# fpr2, tpr2, thresh2 = roc_curve(y_test, pred_prob2[:,1], pos_label=1)
#
# roc curve for tpr = fpr
# random_probs = [0 for i in range(len(y_test))]
# p_fpr, p_tpr, _ = roc_curve(y_test, random_probs, pos_label=1)
import matplotlib.pyplot as plt
# plt.style.use('seaborn')
# plot roc curves
plt.plot(fpr1, tpr1, linestyle='--',color='orange', label='Logistic Regression')
# plt.plot(fpr2, tpr2, linestyle='--',color='green', label='KNN')
# plt.plot(p_fpr, p_tpr, linestyle='--', color='blue')
plt.plot([0,1],[0,1],linestyle='--', color='blue')
# title
plt.title('ROC curve for Classification')
# x label
plt.xlabel('False Positive Rate')
# y label
plt.ylabel('True Positive rate')
plt.legend(loc='best')
# plt.savefig('ROC',dpi=300)
plt.show();
###Output
_____no_output_____
###Markdown
6) Measuring performance while varying the threshold
###Code
thresholds = [0.3 , 0.33 ,0.36,0.39, 0.42 , 0.45 ,0.48, 0.50]
# pred_proba = lr_clf.predict_proba(X_test)
pred_proba_c1 = pred_proba[:,1].reshape(-1,1)
from sklearn.preprocessing import Binarizer
for custom_threshold in thresholds:
binarizer = Binarizer(threshold=custom_threshold).fit(pred_proba_c1)
custom_predict = binarizer.transform(pred_proba_c1)
print('Threshold:',custom_threshold)
accuracy = accuracy_score(y_test , custom_predict)
print("Accuracy: ",round(accuracy,3))
print(" ")
###Output
Threshold: 0.3
Accuracy: 0.487
Threshold: 0.33
Accuracy: 0.487
Threshold: 0.36
Accuracy: 0.494
Threshold: 0.39
Accuracy: 0.526
Threshold: 0.42
Accuracy: 0.539
Threshold: 0.45
Accuracy: 0.552
Threshold: 0.48
Accuracy: 0.558
Threshold: 0.5
Accuracy: 0.565
###Markdown
7) Cross-validation

In general, plain k-fold cross-validation is used for regression and StratifiedKFold for classification, because when the data are imbalanced, plain k-fold cross-validation may not evaluate performance well. (A leave-one-out variant also appears below, commented out.)
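The point about imbalanced (or simply class-sorted) data can be seen directly; a small sketch (synthetic labels, not part of the lab) comparing the class ratio inside each fold:

```python
# Sketch (synthetic, class-sorted labels): StratifiedKFold keeps the class ratio
# in every fold, while plain KFold without shuffling does not.
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

X = np.zeros((20, 1))
y = np.array([0] * 10 + [1] * 10)   # labels sorted by class

for name, cv in [('KFold', KFold(n_splits=5)), ('StratifiedKFold', StratifiedKFold(n_splits=5))]:
    ratios = [y[test].mean() for _, test in cv.split(X, y)]
    print(name, ratios)
# KFold           [0.0, 0.0, 0.5, 1.0, 1.0] -> most folds see only one class
# StratifiedKFold [0.5, 0.5, 0.5, 0.5, 0.5] -> every fold keeps the 50/50 ratio
```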
###Code
# cross_validation
from sklearn.model_selection import KFold, StratifiedKFold, LeaveOneOut
kfold = KFold(n_splits=5)
sfold = StratifiedKFold()
# loo = LeaveOneOut()
from sklearn.model_selection import cross_val_score
lr_clf = LogisticRegression(max_iter=1000,)
kfold_score = cross_val_score(lr_clf, X, y, cv=kfold)
sfold_score = cross_val_score(lr_clf, X, y, cv=sfold)
# loo_score = cross_val_score(lr_clf, X, y, cv=loo)
print('KFold accuracy: {:.2f} %'.format(kfold_score.mean()*100))
print('StratifiedKFold accuracy: {:.2f}'.format(sfold_score.mean()))
# print('LeaveOneOut accuracy: {:.2f}'.format(loo_score.mean()))
###Output
KFold accuracy: 77.09 %
StratifiedKFold accuracy: 0.77
LeaveOneOut accuracy: 0.78
|
curso-mlflow-alura/project/notebooks/model_training.ipynb | ###Markdown
House Prices Prediction
Notebook with a basic model to use as an example in MLflow.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
import math
import mlflow
from xgboost import XGBRegressor
df = pd.read_csv('../data/processed/casas.csv')
df.head()
X = df.drop('preco', axis = 1)
y = df['preco'].copy()
X.head()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42)
mlflow.set_experiment('house_prices_eda')
###Output
INFO: 'house_prices_eda' does not exist. Creating a new experiment
###Markdown
Linear Regression
###Code
mlflow.start_run()
lr_model = LinearRegression()
lr_model.fit(X_train, y_train)
mlflow.sklearn.log_model(lr_model, 'lr')
lr_predicted = lr_model.predict(X_test)
mse_lr = mean_squared_error(y_test, lr_predicted)
rmse_lr = math.sqrt(mse_lr)
r2_lr = r2_score(y_test, lr_predicted)
print(f'Linear Regression Model\nMSE: {mse_lr}\nRMSE: {rmse_lr}\nR2: {r2_lr}')
mlflow.log_metric('mse', mse_lr)
mlflow.log_metric('rmse', rmse_lr)
mlflow.log_metric('r2', r2_lr)
mlflow.end_run()
###Output
_____no_output_____
###Markdown
XGBoost
###Code
xgb_params = {
'learning_rate': 0.2,
'n_estimators': 50,
'random_state': 42
}
with mlflow.start_run():
xgb_model = XGBRegressor(**xgb_params)
xgb_model.fit(X_train, y_train)
mlflow.xgboost.log_model(xgb_model, 'xgboost')
xgb_predicted = xgb_model.predict(X_test)
mse_xgb = mean_squared_error(y_test, xgb_predicted)
rmse_xgb = math.sqrt(mse_xgb)
r2_xgb = r2_score(y_test, xgb_predicted)
print(f'XGBRegressor\nMSE: {mse_xgb}\nRMSE: {rmse_xgb}\nR2: {r2_xgb}')
mlflow.log_metric('mse', mse_xgb)
mlflow.log_metric('rmse', rmse_xgb)
mlflow.log_metric('r2', r2_xgb)
###Output
XGBRegressor
MSE: 1386727460.1346002
RMSE: 37238.789724353286
R2: 0.8012741720529797
|
examples/Notebooks/flopy3_swi2package_ex4.ipynb | ###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System

This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer are horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable.

The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days).

The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell.

A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2, row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer).

Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
###Output
3.7.7 (default, Mar 26 2020, 10:32:53)
[Clang 4.0.1 (tags/RELEASE_401/final)]
numpy version: 1.19.2
matplotlib version: 3.3.0
flopy version: 3.3.2
###Markdown
Define the name of your model and the location of the MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.
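As a quick check (illustration only, not part of the original example), the period lengths and step counts defined in the next cell reproduce the constant time steps quoted above:

```python
# Quick check: stress-period length / number of steps = time step, in years.
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]   # days
nstp = [1000, 120, 180]
print([p / n / 365.25 for p, n in zip(perlen, nstp)])  # [0.2, 0.1, 0.1] years
```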
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype=int)  # np.int is deprecated in recent numpy
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
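The GHB conductance of 62.5 m$^2$/d used in the next cell is just the cell area divided by the 40-day resistance stated in the problem description; a quick check (illustration only):

```python
# Quick check: GHB conductance = DELR * DELC / resistance, leakance = 1 / resistance.
delr, delc = 50.0, 50.0   # cell dimensions, m
resistance = 40.0         # days
print(delr * delc / resistance)  # 62.5 m^2/d, the value used for lrchc[:, 4] below
print(1.0 / resistance)          # 0.025 1/d leakance
```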
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0:lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=float)
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
# create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
# swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = 'swiex4_s1'
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = 'swiex4_s2'
ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1,
toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
# legend
leg = plt.legend(loc='lower right', numpoints=1)
# axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
###Output
_____no_output_____
###Markdown
Use the `PlotCrossSection` plotting class and its `plot_fill_between()` method to fill between zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt/2))
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
colors = ['#40d3f7', '#F76541']
ax = fig.add_subplot(1, 2, 1)
modelxsect = flopy.plot.PlotCrossSection(model=ml, line={'Row': 30},
extent=(0, 3050, -50, -10))
modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax,
edgecolors='none')
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Recharge year {}'.format(years[4]));
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Scenario year {}'.format(years[-1]));
###Output
_____no_output_____
###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island SystemThis example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer is horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable.The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days).The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell.A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawingsaltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2 , row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. 
The saltwater well is intended to prevent the interface fromupconing into the upper aquifer (model layer). Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
###Output
3.8.6 | packaged by conda-forge | (default, Oct 7 2020, 18:42:56)
[Clang 10.0.1 ]
numpy version: 1.18.5
matplotlib version: 3.2.2
flopy version: 3.3.3
###Markdown
Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype= int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0:lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=float)
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
# create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
# swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = 'swiex4_s1'
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = 'swiex4_s2'
ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1,
toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True)
###Output
_____no_output_____
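###Markdown
The budget file holds one `(nlay, nrow, ncol)` array of interface elevations per saved record, so `zeta` is four-dimensional after stacking. A minimal shape check (illustrative only) that matches the indexing used in the plots below:
###Code
# illustrative check: zeta is indexed as [record, layer, row, column]
assert zeta.ndim == 4
assert zeta.shape[0] == len(kstpkper)
assert zeta.shape[1:] == (nlay, nrow, ncol)
###Output
_____no_output_____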
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
# legend
leg = plt.legend(loc='lower right', numpoints=1)
# axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
###Output
_____no_output_____
###Markdown
Use the `PlotCrossSection` plotting class and its `plot_fill_between()` method to fill between the zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt/2))
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
colors = ['#40d3f7', '#F76541']
ax = fig.add_subplot(1, 2, 1)
modelxsect = flopy.plot.PlotCrossSection(model=ml, line={'Row': 30},
extent=(0, 3050, -50, -10))
modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax,
edgecolors='none')
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Recharge year {}'.format(years[4]));
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Scenario year {}'.format(years[-1]));
###Output
_____no_output_____
###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System
This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer are horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable. The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivities of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days). The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell. A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2, row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1.
The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer).
Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join("..", ".."))
sys.path.append(fpth)
import flopy
print(sys.version)
print("numpy version: {}".format(np.__version__))
print("matplotlib version: {}".format(mpl.__version__))
print("flopy version: {}".format(flopy.__version__))
###Output
3.8.11 (default, Aug 6 2021, 08:56:27)
[Clang 10.0.0 ]
numpy version: 1.19.2
matplotlib version: 3.4.2
flopy version: 3.3.5
###Markdown
Define the name of your model and the location of the MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
# Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = "mf2005"
if platform.system() == "Windows":
exe_name = "mf2005.exe"
workspace = os.path.join("data")
# make sure workspace directory exists
if not os.path.isdir(workspace):
os.makedirs(workspace, exist_ok=True)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows, and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface moves. There are three stress periods with lengths of 200, 12, and 18 years and 1,000, 120, and 180 time steps, respectively.
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200.0, 365.25 * 12.0, 365.25 * 18.0]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
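###Markdown
As a quick check (illustrative only), the time-step length in each stress period is `perlen / nstp`, which gives the constant 0.2-, 0.1-, and 0.1-year steps described above.
###Code
# illustrative check of the time-step lengths, in years
dt_years = [perlen[i] / nstp[i] / 365.25 for i in range(nper)]
assert [round(dt, 3) for dt in dt_years] == [0.2, 0.1, 0.1]
###Output
_____no_output_____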
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10.0, -30.0, -50.0])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype=int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.0
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.0
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0: lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=float)
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.0
lrcqsw[1, 3] = -25.0
# create well dictionary
base_well_data = {0: lrcq, 1: lrcqw}
swwells_well_data = {0: lrcq, 1: lrcqw, 2: lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult # 0.005
tipslope = nu[1] / numult # 0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz = 0.2
# swi2 observations
obsnam = ["layer1_", "layer2_"]
obslrc = [[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
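###Markdown
The GHB conductance entered in `lrchc[:, 4]` is the cell area divided by the 40-day resistance quoted in the problem description (equivalently, the area multiplied by a leakance of 0.025 d$^{-1}$), and the recharge rate of 0.4 mm/d is entered in m/d. A minimal check (illustrative only; the `resistance` variable is introduced here for clarity):
###Code
# illustrative check of the boundary-condition values quoted in the text
resistance = 40.0  # days; leakance = 1 / resistance = 0.025 per day
assert delr * delc / resistance == 62.5  # m^2/d, as entered in lrchc[:, 4]
assert rch.max() == 0.0004  # 0.4 mm/d entered as m/d
###Output
_____no_output_____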
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {
(0, 199): ["print budget", "save head"],
(0, 200): [],
(0, 399): ["print budget", "save head"],
(0, 400): [],
(0, 599): ["print budget", "save head"],
(0, 600): [],
(0, 799): ["print budget", "save head"],
(0, 800): [],
(0, 999): ["print budget", "save head"],
(1, 0): [],
(1, 59): ["print budget", "save head"],
(1, 60): [],
(1, 119): ["print budget", "save head"],
(1, 120): [],
(2, 0): [],
(2, 59): ["print budget", "save head"],
(2, 60): [],
(2, 119): ["print budget", "save head"],
(2, 120): [],
(2, 179): ["print budget", "save head"],
}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = "swiex4_s1"
ml = flopy.modflow.Modflow(
modelname, version="mf2005", exe_name=exe_name, model_ws=workspace
)
discret = flopy.modflow.ModflowDis(
ml,
nlay=nlay,
nrow=nrow,
ncol=ncol,
laycbd=0,
delr=delr,
delc=delc,
top=botm[0],
botm=botm[1:],
nper=nper,
perlen=perlen,
nstp=nstp,
)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(
ml,
nsrf=1,
istrat=1,
toeslope=toeslope,
tipslope=tipslope,
nu=nu,
zeta=z,
ssz=ssz,
isource=iso,
nsolver=1,
nadptmx=nadptmx,
nadptmn=nadptmn,
nobs=nobs,
iswiobs=iswiobs,
obsnam=obsnam,
obslrc=obslrc,
iswizt=55,
)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(
ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50
)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
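###Markdown
In recent FloPy versions `run_model()` returns a `(success, output)` tuple, so the outcome of a silent run can be checked programmatically; a minimal pattern (sketch only, not required for the example) is shown below.
###Code
# optional pattern: capture the return value to confirm the run completed
success, buff = ml.run_model(silent=True)
assert success, "MODFLOW did not terminate normally"
###Output
_____no_output_____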
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = "swiex4_s2"
ml2 = flopy.modflow.Modflow(
modelname2, version="mf2005", exe_name=exe_name, model_ws=workspace
)
discret = flopy.modflow.ModflowDis(
ml2,
nlay=nlay,
nrow=nrow,
ncol=ncol,
laycbd=0,
delr=delr,
delc=delc,
top=botm[0],
botm=botm[1:],
nper=nper,
perlen=perlen,
nstp=nstp,
)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(
ml2,
nsrf=1,
istrat=1,
toeslope=toeslope,
tipslope=tipslope,
nu=nu,
zeta=z,
ssz=ssz,
isource=iso,
nsolver=1,
nadptmx=nadptmx,
nadptmn=nadptmn,
nobs=nobs,
iswiobs=iswiobs,
obsnam=obsnam,
obslrc=obslrc,
iswizt=55,
)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(
ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50
)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(
os.path.join(ml.model_ws, modelname + ".zta")
)
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text="ZETASRF 1")[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(
os.path.join(ml.model_ws, modelname + ".zobs.out"), names=True
)
###Output
_____no_output_____
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(
os.path.join(ml2.model_ws, modelname2 + ".zta")
)
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text="ZETASRF 1")[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(
os.path.join(ml2.model_ws, modelname2 + ".zobs.out"), names=True
)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.0
xedge = np.linspace(-1525, 1525, 62)
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet # winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({"legend.fontsize": 6, "legend.frameon": False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor="w")
fig.subplots_adjust(
wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop
)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(
xcell,
zeta[idx, 0, 30, :],
drawstyle="steps-mid",
linewidth=0.5,
color=cc[idx],
label="{:2d} years".format(years[idx]),
)
# layer 2
ax.plot(
xcell,
zeta[idx, 1, 30, :],
drawstyle="steps-mid",
linewidth=0.5,
color=cc[idx],
label="_None",
)
ax.plot([-1500, 1500], [-30, -30], color="k", linewidth=1.0)
# legend
plt.legend(loc="lower left")
# axes labels and text
ax.set_xlabel("Horizontal distance, in meters")
ax.set_ylabel("Elevation, in meters")
ax.text(
0.025,
0.55,
"Layer 1",
transform=ax.transAxes,
va="center",
ha="left",
size="7",
)
ax.text(
0.025,
0.45,
"Layer 2",
transform=ax.transAxes,
va="center",
ha="left",
size="7",
)
ax.text(
0.975,
0.1,
"Recharge conditions",
transform=ax.transAxes,
va="center",
ha="right",
size="8",
)
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(
xcell,
zeta[idx, 0, 30, :],
drawstyle="steps-mid",
linewidth=0.5,
color=cc[idx - 5],
label="{:2d} years".format(years[idx]),
)
# layer 2
ax.plot(
xcell,
zeta[idx, 1, 30, :],
drawstyle="steps-mid",
linewidth=0.5,
color=cc[idx - 5],
label="_None",
)
ax.plot([-1500, 1500], [-30, -30], color="k", linewidth=1.0)
# legend
plt.legend(loc="lower left")
# axes labels and text
ax.set_xlabel("Horizontal distance, in meters")
ax.set_ylabel("Elevation, in meters")
ax.text(
0.025,
0.55,
"Layer 1",
transform=ax.transAxes,
va="center",
ha="left",
size="7",
)
ax.text(
0.025,
0.45,
"Layer 2",
transform=ax.transAxes,
va="center",
ha="left",
size="7",
)
ax.text(
0.975,
0.1,
"Freshwater well withdrawal",
transform=ax.transAxes,
va="center",
ha="right",
size="8",
)
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(
xcell,
zeta2[idx, 0, 30, :],
drawstyle="steps-mid",
linewidth=0.5,
color=cc[idx - 5],
label="{:2d} years".format(years[idx]),
)
# layer 2
ax.plot(
xcell,
zeta2[idx, 1, 30, :],
drawstyle="steps-mid",
linewidth=0.5,
color=cc[idx - 5],
label="_None",
)
ax.plot([-1500, 1500], [-30, -30], color="k", linewidth=1.0)
# legend
plt.legend(loc="lower left")
# axes labels and text
ax.set_xlabel("Horizontal distance, in meters")
ax.set_ylabel("Elevation, in meters")
ax.text(
0.025,
0.55,
"Layer 1",
transform=ax.transAxes,
va="center",
ha="left",
size="7",
)
ax.text(
0.025,
0.45,
"Layer 2",
transform=ax.transAxes,
va="center",
ha="left",
size="7",
)
ax.text(
0.975,
0.1,
"Freshwater and saltwater\nwell withdrawals",
transform=ax.transAxes,
va="center",
ha="right",
size="8",
)
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
t = zobs["TOTIM"][999:] / 365 - 200.0
tz2 = zobs["layer1_001"][999:]
tz3 = zobs2["layer1_001"][999:]
for i in range(len(t)):
if zobs["layer2_001"][i + 999] < -30.0 - 0.1:
tz2[i] = zobs["layer2_001"][i + 999]
if zobs2["layer2_001"][i + 999] < 20.0 - 0.1:
tz3[i] = zobs2["layer2_001"][i + 999]
ax.plot(
t,
tz2,
linestyle="solid",
color="r",
linewidth=0.75,
label="Freshwater well",
)
ax.plot(
t,
tz3,
linestyle="dotted",
color="r",
linewidth=0.75,
label="Freshwater and saltwater well",
)
ax.plot([0, 30], [-30, -30], "k", linewidth=1.0, label="_None")
# legend
leg = plt.legend(loc="lower right", numpoints=1)
# axes labels and text
ax.set_xlabel("Time, in years")
ax.set_ylabel("Elevation, in meters")
ax.text(
0.025,
0.55,
"Layer 1",
transform=ax.transAxes,
va="center",
ha="left",
size="7",
)
ax.text(
0.025,
0.45,
"Layer 2",
transform=ax.transAxes,
va="center",
ha="left",
size="7",
);
###Output
_____no_output_____
###Markdown
Use the `PlotCrossSection` plotting class and its `plot_fill_between()` method to fill between the zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt / 2))
fig.subplots_adjust(
wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop
)
colors = ["#40d3f7", "#F76541"]
ax = fig.add_subplot(1, 2, 1)
modelxsect = flopy.plot.PlotCrossSection(
model=ml, line={"Row": 30}, extent=(0, 3050, -50, -10)
)
modelxsect.plot_fill_between(
zeta[4, :, :, :], colors=colors, ax=ax, edgecolors="none"
)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title("Recharge year {}".format(years[4]))
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title("Scenario year {}".format(years[-1]));
###Output
_____no_output_____
###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System
This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer are horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable. The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivities of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days). The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell. A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2, row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1.
The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer).
Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
%matplotlib inline
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
###Output
3.6.4 | packaged by conda-forge | (default, Dec 23 2017, 16:54:01)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)]
numpy version: 1.14.0
matplotlib version: 2.1.2
flopy version: 3.2.9
###Markdown
Define the name of your model and the location of the MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows, and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface moves. There are three stress periods with lengths of 200, 12, and 18 years and 1,000, 120, and 180 time steps, respectively.
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype=int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0:lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=float)
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
# create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
# swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = 'swiex4_s1'
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = 'swiex4_s2'
ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1,
toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True)
###Output
_____no_output_____
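###Markdown
`np.genfromtxt` with `names=True` returns a structured array, so the observation columns referenced in the plots below can be checked by name; a minimal sketch (illustrative only):
###Code
# zobs is a structured array; these are the columns used in the plots below
for name in ('TOTIM', 'layer1_001', 'layer2_001'):
    assert name in zobs.dtype.names
###Output
_____no_output_____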
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
# legend
leg = plt.legend(loc='lower right', numpoints=1)
# axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
###Output
_____no_output_____
###Markdown
Use the `ModelCrossSection` plotting class and its `plot_fill_between()` method to fill between the zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt/2))
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
colors = ['#40d3f7', '#F76541']
ax = fig.add_subplot(1, 2, 1)
modelxsect = flopy.plot.ModelCrossSection(model=ml, line={'Row': 30},
extent=(0, 3050, -50, -10))
modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax,
edgecolors='none')
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Recharge year {}'.format(years[4]));
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Scenario year {}'.format(years[-1]));
###Output
_____no_output_____
###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System
This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer are horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable. The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivities of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days). The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell. A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2, row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1.
The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer).
Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
###Output
flopy is installed in /Users/jdhughes/Documents/Development/flopy_git/flopy_fork/flopy
3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 14:38:56)
[Clang 4.0.1 (tags/RELEASE_401/final)]
numpy version: 1.17.3
matplotlib version: 3.1.1
flopy version: 3.3.1
###Markdown
Define the name of your model and the location of the MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows, and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface moves. There are three stress periods with lengths of 200, 12, and 18 years and 1,000, 120, and 180 time steps, respectively.
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype=int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0:lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=float)
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
# create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
# swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
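###Markdown
FloPy uses zero-based layer, row, and column indices, so the wells described in the text at layers 1 and 2, row 31, column 36 are entered above as `(0, 30, 35)` and `(1, 30, 35)`. A minimal check (illustrative only):
###Code
# illustrative check of the zero-based well locations defined above
assert tuple(lrcq[0, :3].astype(int)) == (0, 30, 35)
assert tuple(lrcq[1, :3].astype(int)) == (1, 30, 35)
###Output
_____no_output_____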
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = 'swiex4_s1'
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = 'swiex4_s2'
ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1,
toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
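# observation index 999 is the last time step of the 200-year first stress period (1,000 steps);
# dividing TOTIM (days) by 365 and subtracting 200 gives approximate years since pumping began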
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
# legend
leg = plt.legend(loc='lower right', numpoints=1)
# axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
###Output
_____no_output_____
###Markdown
Use the `PlotCrossSection` plotting class and its `plot_fill_between()` method to fill between zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt/2))
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
colors = ['#40d3f7', '#F76541']
ax = fig.add_subplot(1, 2, 1)
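# cross section along model row 31 (zero-based row 30); the extent covers the full
# 3,050 m model width (61 cells x 50 m)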
modelxsect = flopy.plot.PlotCrossSection(model=ml, line={'Row': 30},
extent=(0, 3050, -50, -10))
modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax,
edgecolors='none')
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Recharge year {}'.format(years[4]));
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Scenario year {}'.format(years[-1]));
###Output
_____no_output_____
###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System
This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer is horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable.
The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days).
The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell.
A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2, row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer).
Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
###Output
flopy is installed in /Users/jdhughes/Documents/Development/flopy_git/flopy_us/flopy
3.7.3 (default, Mar 27 2019, 16:54:48)
[Clang 4.0.1 (tags/RELEASE_401/final)]
numpy version: 1.16.2
matplotlib version: 3.0.3
flopy version: 3.2.12
###Markdown
Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype= np.int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=np.float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
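# flag the 10-cell (500 m) offshore fringe along each side of the island;
# GHB cells are placed where index == 1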
index = np.zeros((nrow, ncol), dtype=np.int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
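# GHB conductance = cell area / resistance = 50 m * 50 m / 40 d = 62.5 m^2/d,
# equivalent to a leakance of 0.025 1/d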
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0:lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=np.float)
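# apply 0.4 mm/d (0.0004 m/d) of recharge to the island interior (index == 0) only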
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
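# both wells are in row 31, column 36 (zero-based 30, 35): lrcqw pumps 250 m^3/d of
# freshwater from layer 1, and lrcqsw adds a 25 m^3/d saltwater withdrawal from layer 2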
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
# create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
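# initial interface (zeta) elevation: 1 m below land surface (-11 m) on the island,
# at the top of the upper aquifer (-10 m) offshore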
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
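# ISOURCE is 1 on the island interior of layer 1, -2 in the layer 1 GHB cells (inflow
# from the ocean is saltwater), and 2 at the layer 2 well cell used by the saltwater well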
iso = np.zeros((nlay, nrow, ncol), dtype=np.int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
# swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = 'swiex4_s1'
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = 'swiex4_s2'
ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1,
toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
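# output times saved by the OC package: the first five entries are years within the
# 200-year recharge period, the last five are years after the withdrawals begin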
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
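# observation index 999 is the last time step of the 200-year first stress period (1,000 steps);
# dividing TOTIM (days) by 365 and subtracting 200 gives approximate years since pumping began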
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
# legend
leg = plt.legend(loc='lower right', numpoints=1)
# axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
###Output
_____no_output_____
###Markdown
Use the `PlotCrossSection` plotting class and its `plot_fill_between()` method to fill between zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt/2))
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
colors = ['#40d3f7', '#F76541']
ax = fig.add_subplot(1, 2, 1)
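# cross section along model row 31 (zero-based row 30); the extent covers the full
# 3,050 m model width (61 cells x 50 m)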
modelxsect = flopy.plot.PlotCrossSection(model=ml, line={'Row': 30},
extent=(0, 3050, -50, -10))
modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax,
edgecolors='none')
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Recharge year {}'.format(years[4]));
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Scenario year {}'.format(years[-1]));
###Output
_____no_output_____
###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System
This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer is horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable.
The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days).
The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell.
A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2, row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer).
Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
###Output
flopy is installed in /Users/jdhughes/Documents/Development/flopy_git/flopy_us/flopy
3.6.7 | packaged by conda-forge | (default, Feb 28 2019, 02:16:08)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
numpy version: 1.14.5
matplotlib version: 2.2.2
flopy version: 3.2.11
###Markdown
Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype= np.int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=np.float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=np.int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
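# GHB conductance = cell area / resistance = 50 m * 50 m / 40 d = 62.5 m^2/d,
# equivalent to a leakance of 0.025 1/d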
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0:lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=np.float)
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
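# both wells are in row 31, column 36 (zero-based 30, 35): lrcqw pumps 250 m^3/d of
# freshwater from layer 1, and lrcqsw adds a 25 m^3/d saltwater withdrawal from layer 2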
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
# create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=np.int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
# swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = 'swiex4_s1'
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = 'swiex4_s2'
ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1,
toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
# legend
leg = plt.legend(loc='lower right', numpoints=1)
# axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
###Output
_____no_output_____
###Markdown
Use the `PlotCrossSection` plotting class and its `plot_fill_between()` method to fill between zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt/2))
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
colors = ['#40d3f7', '#F76541']
ax = fig.add_subplot(1, 2, 1)
modelxsect = flopy.plot.PlotCrossSection(model=ml, line={'Row': 30},
extent=(0, 3050, -50, -10))
modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax,
edgecolors='none')
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Recharge year {}'.format(years[4]));
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Scenario year {}'.format(years[-1]));
###Output
_____no_output_____
###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System
This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer is horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable.
The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days).
The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell.
A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2, row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer).
Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
###Output
flopy is installed in /Users/jdhughes/Documents/Development/flopy_git/flopy_fork/flopy
3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 14:38:56)
[Clang 4.0.1 (tags/RELEASE_401/final)]
numpy version: 1.17.3
matplotlib version: 3.1.1
flopy version: 3.3.0
###Markdown
Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype= np.int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=np.float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=np.int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
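# GHB conductance = cell area / resistance = 50 m * 50 m / 40 d = 62.5 m^2/d,
# equivalent to a leakance of 0.025 1/d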
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0:lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=np.float)
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
# create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=np.int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
# swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = 'swiex4_s1'
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = 'swiex4_s2'
ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1,
toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
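# output times saved by the OC package: the first five entries are years within the
# 200-year recharge period, the last five are years after the withdrawals begin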
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
# legend
leg = plt.legend(loc='lower right', numpoints=1)
# axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
###Output
_____no_output_____
###Markdown
Use the `PlotCrossSection` plotting class and its `plot_fill_between()` method to fill between zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt/2))
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
colors = ['#40d3f7', '#F76541']
ax = fig.add_subplot(1, 2, 1)
modelxsect = flopy.plot.PlotCrossSection(model=ml, line={'Row': 30},
extent=(0, 3050, -50, -10))
modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax,
edgecolors='none')
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Recharge year {}'.format(years[4]));
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Scenario year {}'.format(years[-1]));
###Output
_____no_output_____
###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island SystemThis example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer is horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable.The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days).The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell.A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawingsaltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2 , row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. 
The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer). Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
%matplotlib inline
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
###Output
3.7.7 (default, Mar 26 2020, 10:32:53)
[Clang 4.0.1 (tags/RELEASE_401/final)]
numpy version: 1.19.2
matplotlib version: 3.3.0
flopy version: 3.3.2
###Markdown
Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
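###Markdown
 As a quick sanity check of the time discretization (this cell is an extra aside, using the `perlen` and `nstp` arrays defined above), the implied time-step length in years should come out to the 0.2-, 0.1-, and 0.1-year steps described in the problem statement.
###Code
# extra check: time-step length in years for each stress period
for p, (pl, ns) in enumerate(zip(perlen, nstp)):
    print('stress period {}: {:.2f} years per time step'.format(p + 1, pl / ns / 365.25))
###Output
_____no_output_____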
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype= int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0:lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=float)
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
# create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
# swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
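###Markdown
 A small arithmetic cross-check (an extra aside, not part of the model input itself): the GHB conductance entered above as `50.0 * 50.0 / 40.0` should equal the 62.5 m$^2$/d quoted in the problem description, i.e. the cell area divided by a resistance of 40 days, which is the same as a leakance of 0.025 d$^{-1}$.
###Code
# extra check of the GHB conductance used in lrchc[:, 4]
cell_area = delr * delc            # 50 m x 50 m cell
resistance = 40.0                  # days
print(cell_area / resistance)      # conductance in m^2/d, expected 62.5
print(1.0 / resistance)            # leakance in 1/d, expected 0.025
###Output
_____no_output_____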
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = 'swiex4_s1'
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = 'swiex4_s2'
ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1,
toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
# legend
leg = plt.legend(loc='lower right', numpoints=1)
# axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
###Output
_____no_output_____
###Markdown
Use the `PlotCrossSection` plotting class and `plot_fill_between()` method to fill between zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt/2))
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
colors = ['#40d3f7', '#F76541']
ax = fig.add_subplot(1, 2, 1)
modelxsect = flopy.plot.PlotCrossSection(model=ml, line={'Row': 30},
extent=(0, 3050, -50, -10))
modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax,
edgecolors='none')
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Recharge year {}'.format(years[4]));
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Scenario year {}'.format(years[-1]));
###Output
_____no_output_____
###Markdown
FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System. This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer are horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable. The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (`DELR`), 50 m (`DELC`), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivities of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days). The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (`NSRF=1`) that approximates the 50-percent seawater salinity contour. Fluid density is represented using the stratified density option (`ISTRAT=1`). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a `TOESLOPE` and `TIPSLOPE` of 0.005, a default `ALPHA` of 0.1, and a default `BETA` of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 `ISOURCE` parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. `ISOURCE` in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 `ISOURCE` parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active `ZETA` surface in the cell. A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2, row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1.
The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer). Import `numpy` and `matplotlib`, set all figures to be inline, import `flopy.modflow` and `flopy.utils`.
###Code
%matplotlib inline
import os
import sys
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
###Output
3.6.5 | packaged by conda-forge | (default, Apr 6 2018, 13:44:09)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)]
numpy version: 1.14.5
matplotlib version: 2.2.2
flopy version: 3.2.10
###Markdown
Define model name of your model and the location of MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named `ml` and specify that this is a MODFLOW-2005 model.
###Code
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mf2005'
if platform.system() == 'Windows':
exe_name = 'mf2005.exe'
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
###Output
_____no_output_____
###Markdown
Define the number of layers, rows and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with a length of 200, 12, and 18 years and 1,000, 120, and 180 steps.
###Code
ncol = 61
nrow = 61
nlay = 2
nper = 3
perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.]
nstp = [1000, 120, 180]
save_head = [200, 60, 60]
steady = True
###Output
_____no_output_____
###Markdown
Specify the cell size along the rows (`delr`) and along the columns (`delc`) and the top and bottom of the aquifer for the `DIS` package.
###Code
# dis data
delr, delc = 50.0, 50.0
botm = np.array([-10., -30., -50.])
###Output
_____no_output_____
###Markdown
Define the `IBOUND` array and starting heads for the `BAS` package. The corners of the model are defined to be inactive.
###Code
# bas data
# ibound - active except for the corners
ibound = np.ones((nlay, nrow, ncol), dtype=int)
ibound[:, 0, 0] = 0
ibound[:, 0, -1] = 0
ibound[:, -1, 0] = 0
ibound[:, -1, -1] = 0
# initial head data
ihead = np.zeros((nlay, nrow, ncol), dtype=float)
###Output
_____no_output_____
###Markdown
Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the `LPF` package.
###Code
# lpf data
laytyp = 0
hk = 10.
vka = 0.2
###Output
_____no_output_____
###Markdown
Define the boundary condition data for the model
###Code
# boundary condition data
# ghb data
colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow))
index = np.zeros((nrow, ncol), dtype=int)
index[:, :10] = 1
index[:, -10:] = 1
index[:10, :] = 1
index[-10:, :] = 1
nghb = np.sum(index)
lrchc = np.zeros((nghb, 5))
lrchc[:, 0] = 0
lrchc[:, 1] = rowcell[index == 1]
lrchc[:, 2] = colcell[index == 1]
lrchc[:, 3] = 0.
lrchc[:, 4] = 50.0 * 50.0 / 40.0
# create ghb dictionary
ghb_data = {0:lrchc}
# recharge data
rch = np.zeros((nrow, ncol), dtype=float)
rch[index == 0] = 0.0004
# create recharge dictionary
rch_data = {0: rch}
# well data
nwells = 2
lrcq = np.zeros((nwells, 4))
lrcq[0, :] = np.array((0, 30, 35, 0))
lrcq[1, :] = np.array([1, 30, 35, 0])
lrcqw = lrcq.copy()
lrcqw[0, 3] = -250
lrcqsw = lrcq.copy()
lrcqsw[0, 3] = -250.
lrcqsw[1, 3] = -25.
# create well dictionary
base_well_data = {0:lrcq, 1:lrcqw}
swwells_well_data = {0:lrcq, 1:lrcqw, 2:lrcqsw}
# swi2 data
nadptmx = 10
nadptmn = 1
nu = [0, 0.025]
numult = 5.0
toeslope = nu[1] / numult #0.005
tipslope = nu[1] / numult #0.005
z1 = -10.0 * np.ones((nrow, ncol))
z1[index == 0] = -11.0
z = np.array([[z1, z1]])
iso = np.zeros((nlay, nrow, ncol), dtype=int)
iso[0, :, :][index == 0] = 1
iso[0, :, :][index == 1] = -2
iso[1, 30, 35] = 2
ssz=0.2
# swi2 observations
obsnam = ['layer1_', 'layer2_']
obslrc=[[0, 30, 35], [1, 30, 35]]
nobs = len(obsnam)
iswiobs = 1051
###Output
_____no_output_____
###Markdown
Create output control (OC) data using words
###Code
# oc data
spd = {(0,199): ['print budget', 'save head'],
(0,200): [],
(0,399): ['print budget', 'save head'],
(0,400): [],
(0,599): ['print budget', 'save head'],
(0,600): [],
(0,799): ['print budget', 'save head'],
(0,800): [],
(0,999): ['print budget', 'save head'],
(1,0): [],
(1,59): ['print budget', 'save head'],
(1,60): [],
(1,119): ['print budget', 'save head'],
(1,120): [],
(2,0): [],
(2,59): ['print budget', 'save head'],
(2,60): [],
(2,119): ['print budget', 'save head'],
(2,120): [],
(2,179): ['print budget', 'save head']}
###Output
_____no_output_____
###Markdown
Create the model with the freshwater well (Simulation 1)
###Code
modelname = 'swiex4_s1'
ml = flopy.modflow.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml, stress_period_data=base_well_data)
ghb = flopy.modflow.ModflowGhb(ml, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 1 MODFLOW input files and run the model
###Code
ml.write_input()
ml.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Create the model with the saltwater well (Simulation 2)
###Code
modelname2 = 'swiex4_s2'
ml2 = flopy.modflow.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace)
discret = flopy.modflow.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0,
delr=delr, delc=delc, top=botm[0], botm=botm[1:],
nper=nper, perlen=perlen, nstp=nstp)
bas = flopy.modflow.ModflowBas(ml2, ibound=ibound, strt=ihead)
lpf = flopy.modflow.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka)
wel = flopy.modflow.ModflowWel(ml2, stress_period_data=swwells_well_data)
ghb = flopy.modflow.ModflowGhb(ml2, stress_period_data=ghb_data)
rch = flopy.modflow.ModflowRch(ml2, rech=rch_data)
swi = flopy.modflow.ModflowSwi2(ml2, nsrf=1, istrat=1,
toeslope=toeslope, tipslope=tipslope, nu=nu,
zeta=z, ssz=ssz, isource=iso, nsolver=1,
nadptmx=nadptmx, nadptmn=nadptmn,
nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc, iswizt=55)
oc = flopy.modflow.ModflowOc(ml2, stress_period_data=spd)
pcg = flopy.modflow.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50)
###Output
ModflowSwi2: specification of nobs is deprecated.
###Markdown
Write the simulation 2 MODFLOW input files and run the model
###Code
ml2.write_input()
ml2.run_model(silent=True)
###Output
_____no_output_____
###Markdown
Load the simulation 1 `ZETA` data and `ZETA` observations.
###Code
# read base model zeta
zfile = flopy.utils.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta'))
kstpkper = zfile.get_kstpkper()
zeta = []
for kk in kstpkper:
zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta = np.array(zeta)
# read swi obs
zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Load the simulation 2 `ZETA` data and `ZETA` observations.
###Code
# read saltwater well model zeta
zfile2 = flopy.utils.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta'))
kstpkper = zfile2.get_kstpkper()
zeta2 = []
for kk in kstpkper:
zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0])
zeta2 = np.array(zeta2)
# read swi obs
zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs.out'), names=True)
###Output
_____no_output_____
###Markdown
Create arrays for the x-coordinates and the output years
###Code
x = np.linspace(-1500, 1500, 61)
xcell = np.linspace(-1500, 1500, 61) + delr / 2.
xedge = np.linspace(-1525, 1525, 62)
years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30]
###Output
_____no_output_____
###Markdown
Define figure dimensions and colors used for plotting `ZETA` surfaces
###Code
# figure dimensions
fwid, fhgt = 8.00, 5.50
flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925
# line color definition
icolor = 5
colormap = plt.cm.jet #winter
cc = []
cr = np.linspace(0.9, 0.0, icolor)
for idx in cr:
cc.append(colormap(idx))
###Output
_____no_output_____
###Markdown
Recreate **Figure 9** from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/).
###Code
plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False})
fig = plt.figure(figsize=(fwid, fhgt), facecolor='w')
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
# first plot
ax = fig.add_subplot(2, 2, 1)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8')
# second plot
ax = fig.add_subplot(2, 2, 2)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8')
# third plot
ax = fig.add_subplot(2, 2, 3)
# axes limits
ax.set_xlim(-1500, 1500)
ax.set_ylim(-50, -10)
for idx in range(5, len(years)):
# layer 1
ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx]))
# layer 2
ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid',
linewidth=0.5, color=cc[idx-5], label='_None')
ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0)
# legend
plt.legend(loc='lower left')
# axes labels and text
ax.set_xlabel('Horizontal distance, in meters')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes,
va='center', ha='right', size='8')
# fourth plot
ax = fig.add_subplot(2, 2, 4)
# axes limits
ax.set_xlim(0, 30)
ax.set_ylim(-50, -10)
t = zobs['TOTIM'][999:] / 365 - 200.
tz2 = zobs['layer1_001'][999:]
tz3 = zobs2['layer1_001'][999:]
for i in range(len(t)):
if zobs['layer2_001'][i+999] < -30. - 0.1:
tz2[i] = zobs['layer2_001'][i+999]
if zobs2['layer2_001'][i+999] < 20. - 0.1:
tz3[i] = zobs2['layer2_001'][i+999]
ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well')
ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well')
ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None')
# legend
leg = plt.legend(loc='lower right', numpoints=1)
# axes labels and text
ax.set_xlabel('Time, in years')
ax.set_ylabel('Elevation, in meters')
ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7')
ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7');
###Output
_____no_output_____
###Markdown
Use `ModelCrossSection` plotting class and `plot_fill_between()` method to fill between zeta surfaces.
###Code
fig = plt.figure(figsize=(fwid, fhgt/2))
fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop)
colors = ['#40d3f7', '#F76541']
ax = fig.add_subplot(1, 2, 1)
modelxsect = flopy.plot.ModelCrossSection(model=ml, line={'Row': 30},
extent=(0, 3050, -50, -10))
modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax,
edgecolors='none')
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Recharge year {}'.format(years[4]));
ax = fig.add_subplot(1, 2, 2)
ax.set_xlim(0, 3050)
ax.set_ylim(-50, -10)
modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax)
linecollection = modelxsect.plot_grid(ax=ax)
ax.set_title('Scenario year {}'.format(years[-1]));
###Output
_____no_output_____ |
data/lec1_3_Categorizing_and_Groupby_ForOnelineLecture.ipynb | ###Markdown
Preparing the data
###Code
df = pd.read_csv("my_data/naver_finance/2016_12.csv")
df.head()
###Output
_____no_output_____
###Markdown
Computing returns (16.12 ~ 17.12)
###Code
df['rtn'] = df['price2'] / df['price'] - 1
###Output
_____no_output_____
###Markdown
Assigning group numbers based on PER values. Grouping by value (DIFFERENT number of members in each group) using boolean selection & loc - the ``cut()`` function covered a bit later makes this easier than the approach below, but the techniques used here are also very important, so be sure to master them!
###Code
(df['PER(배)'] >= 10).head()
bound1 = df['PER(배)'] >= 10
bound2 = (5 <= df['PER(배)']) & (df['PER(배)'] < 10)
bound3 = (0 <= df['PER(배)']) & (df['PER(배)'] < 5)
bound4 = df['PER(배)'] < 0
df.shape
df[bound1].shape # = df.loc[bound1].shape
df.loc[bound1, 'PER_Score'] = 1
df.loc[bound2, 'PER_Score'] = 2
df.loc[bound3, 'PER_Score'] = 3
df.loc[bound4, 'PER_Score'] = -1
df['PER_Score'].head()
df['PER_Score'].nunique()
df['PER_Score'].value_counts()
###Output
_____no_output_____
###Markdown
- Why does `PER_Score` come out as a float number?
###Code
df['PER_Score'].hasnans
df['PER_Score'].isna().sum()
df['PER(배)'].isna().sum()
df[df['PER(배)'].isna()]
df.loc[df['PER_Score'].isna(), "PER_Score"] = 0
# the same thing can also be done as follows
# df['PER_Score'] = df['PER_Score'].fillna(0)
# df.loc[:, 'PER_Score'] = df['PER_Score'].fillna(0)
###Output
_____no_output_____
###Markdown
Using the arithmetic properties of boolean Series
###Code
df.loc[:, "PER_Score1"] = (bound1 * 1) + (bound2 * 2) + (bound3 * 3) + (bound4 * -1)
df['PER_Score1'].head()
df['PER_Score1'].value_counts()
df['PER_Score'].value_counts()
###Output
_____no_output_____
###Markdown
Are the two score Series above identical to each other?
###Code
df['PER_Score'].equals(df['PER_Score1'])
df['PER_Score'].dtypes
df['PER_Score1'].dtypes
df['PER_Score'].astype(int).equals(df['PER_Score1'])
###Output
_____no_output_____
###Markdown
`cut()`
###Code
per_cuts = pd.cut(
df['PER(배)'],
[-np.inf, 0, 5, 10, np.inf],
)
per_cuts.head()
per_cuts.iloc[0]
per_cuts.value_counts()
per_cuts.isna().sum()
###Output
_____no_output_____
###Markdown
- Attaching labels at the same time as cut()
###Code
bins = [-np.inf, 10, 20, np.inf]
labels = ['저평가주', '보통주', '고평가주']
per_cuts2 = pd.cut(
df['PER(배)'],
bins=bins,
labels=labels
)
per_cuts2.head()
# df.loc[:, 'PER_score2'] = per_cuts # or per_cuts2
# df['PER_score2'] = per_cuts # or per_cuts2
###Output
_____no_output_____
###Markdown
Grouping based on the number of items per group (SAME number of members in each group): `qcut()`
###Code
pd.qcut(df['PER(배)'], 3, labels=[1,2,3]).head()
df.loc[:, 'PER_Score2'] = pd.qcut(df['PER(배)'], 10, labels=range(1, 11))
df.head()
df['PER_Score2'].value_counts()
df['PER_Score2'].hasnans
df['PER_Score2'].isna().sum()
df['PER_Score2'].dtype
###Output
_____no_output_____
###Markdown
- 'category' type: A string variable consisting of only a few different values
###Code
# extracting the columns of a DataFrame that have the category dtype
# df.select_dtypes(include=['category']).columns
df['PER_Score2'].head()
df['PER_Score2'].value_counts()
df = df.dropna(subset=['PER(배)'])
df['PER_Score2'].isna().sum()
###Output
_____no_output_____
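###Markdown
 A small illustration of why the 'category' dtype is economical: the same values stored as plain strings take noticeably more memory. The exact numbers depend on the data, so this is just a sketch.
###Code
# sketch: memory of the categorical column vs. a plain-string version of it
print(df['PER_Score2'].memory_usage(deep=True))
print(df['PER_Score2'].astype(str).memory_usage(deep=True))
###Output
_____no_output_____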
###Markdown
Split - Apply - Combine
###Code
df = pd.read_csv("my_data/naver_finance/2016_12.csv")
df.shape
df = df.dropna()
df.shape
g_df = df.copy()
g_df.head()
###Output
_____no_output_____
###Markdown
Creating the group scores
###Code
g_df['rtn'] = g_df['price2'] / g_df['price'] - 1
g_df.loc[:, 'PER_score'] = pd.qcut(g_df['PER(배)'], 10, labels=range(1, 11))
g_df.loc[:, 'PBR_score'] = pd.qcut(g_df['PBR(배)'], 10, labels=range(1, 11))
g_df.set_index('ticker', inplace=True)
g_df.head()
g_df.dtypes.value_counts()
###Output
_____no_output_____
###Markdown
groupby() & aggregation - `groupby()` - does not actually perform the grouping yet; it only validates that grouping is possible (preparation) - Aggregation - consists of 2 elements - aggregating columns - aggregating functions - e.g. `sum, min, max, mean, count, variance, std` etc - In the end, only 3 elements need to be supplied! - Grouping columns (categorical data type) - Aggregating columns - Aggregating functions. Taking a look at the `groupby` object
###Code
g_df.groupby('PER_score')
g_df_obj = g_df.groupby(["PBR_score", "PER_score"])
g_df_obj
type(g_df_obj)
g_df_obj.ngroups
g_df['PBR_score'].nunique()
g_df['PER_score'].nunique()
###Output
_____no_output_____
###Markdown
- Think about why `ngroups` differs from (g_df['PBR_score'].nunique() x g_df['PER_score'].nunique())
###Code
type(g_df_obj.size())
g_df_obj.size().head()
# Multi-level index를 가진 Series indexing하는 법
g_df_obj.size().loc[1]
g_df_obj.size().loc[(1, 1)]
# converting a Series -> DataFrame
g_df_obj.size().to_frame().head()
type(g_df_obj.groups)
g_df_obj.groups.keys()
g_df_obj.groups.values()
# Retrieve specific group
g_df_obj.get_group((1, 1))
###Output
_____no_output_____
###Markdown
- Inspecting the grouped object with a for loop (rarely used in practice)
###Code
for name, group in g_df_obj:
print(name)
group.head(2)
break
# Note: when head() is applied to a groupby() object it does not behave like the usual head(), i.e. taking the overall top 2 rows;
# instead it returns a single DataFrame that combines the top 2 rows of each group
g_df.groupby('PER_score').head(2)
###Output
_____no_output_____
###Markdown
aggregation - you must use functions that actually "aggregate" - min, max, mean, median, sum, var, size, nunique, idxmax
###Code
g_df.groupby("PBR_score").agg(
{
"rtn": "mean", # = np.mean
}
)
pbr_rtn_df = g_df.groupby("PBR_score").agg({'rtn': 'mean'})
per_rtn_df = g_df.groupby("PER_score").agg({'rtn': 'mean'})
pbr_rtn_df.head()
# several different ways of doing it (same result)
g_df.groupby("PER_score")['rtn'].agg('mean').head()
g_df.groupby("PER_score")['rtn'].agg(np.mean).head()
g_df.groupby("PER_score")['rtn'].mean().head()
# note that the return type can differ
g_df.groupby("PER_score")['rtn'].agg("mean").head(2) # returned as a Series
g_df.groupby("PER_score")[['rtn']].agg("mean").head(2) # returned as a DataFrame
# aggregation over two or more columns
g_df.groupby("PER_score")[['rtn', 'PBR(배)']].agg("mean").head(2)
# two or more aggregations
g_df.groupby("PER_score")[['rtn', 'PBR(배)']].agg(["mean", "std"]).head(2)
# two or more columns & a different aggregation for each
g_df.groupby("PBR_score").agg(
{
'rtn': ['mean', 'std'],
'PER(배)': ['min']
}
)
###Output
_____no_output_____
###Markdown
- If the function is not an aggregation function => `agg()` raises an error
###Code
# sqrt is not an aggregating operation!
np.sqrt([1, 2, 3, 4])
g_df.groupby("PER_score")['rtn'].agg(np.sqrt)
###Output
_____no_output_____
###Markdown
- A first taste of visualization
###Code
%matplotlib inline
pbr_rtn_df.plot(kind='bar')
pbr_rtn_df.plot(kind='bar');
per_rtn_df.plot(kind='bar');
###Output
_____no_output_____
###Markdown
Examples
###Code
g_df1 = g_df.groupby(["PBR_score", "PER_score"])\
.agg(
{
'rtn': ['mean', 'std', 'min', 'max'],
'ROE(%)': [np.mean, 'size', 'nunique', 'idxmax']
}
)
g_df1.head()
a = g_df.groupby(["PBR_score", "PER_score"])[['rtn', 'ROE(%)']].agg(['sum', 'mean'])
# no need to be intimidated by a multi-index!
a.loc[1]
a.loc[(1, 3)]
a.loc[[(1, 3), (1, 4 )]]
###Output
_____no_output_____
###Markdown
Caution: NaN values are automatically filtered out by groupby, so it is best to finish all preprocessing beforehand
###Code
df = pd.DataFrame({
'a':['소형주', np.nan, '대형주', '대형주'],
'b':[np.nan, 2, 3, np.nan],
})
df
df.groupby(['a'])['b'].mean()
###Output
_____no_output_____
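###Markdown
 A small sketch of how to keep the NaN group instead of silently dropping it, assuming pandas >= 1.1 where `groupby` gained a `dropna` parameter:
###Code
# sketch: keep rows whose group key is NaN as their own group (pandas >= 1.1)
df.groupby(['a'], dropna=False)['b'].mean()
###Output
_____no_output_____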
###Markdown
`as_index = False`: the grouping columns become regular columns instead of the index (the same as aggregating and then calling reset_index())
###Code
a = g_df.groupby(["PER_score"] ).agg({'rtn': ['mean', 'std']}).head(2)
b = g_df.groupby(["PER_score"], as_index=False).agg({'rtn': ['mean', 'std']}).head(2)
a
b
a.index
a.columns
b.index
b.columns
a['rtn']
a[('rtn', 'mean')].head()
###Output
_____no_output_____
###Markdown
Merging multi-index columns into one
###Code
g_df1.head()
level0 = g_df1.columns.get_level_values(0)
level1 = g_df1.columns.get_level_values(1)
level0
level1
g_df1.columns = level0 + '_' + level1
g_df1.head(2)
g_df1 = g_df1.reset_index()
g_df1.head()
###Output
_____no_output_____
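###Markdown
 As a side note, the same flattening can be written as a one-liner; this is a sketch that builds a fresh MultiIndex-column frame first (since `g_df1` above has already been flattened):
###Code
# sketch: one-line alternative for flattening MultiIndex columns
tmp = g_df.groupby("PBR_score").agg({'rtn': ['mean', 'std']})
tmp.columns = ['_'.join(col) for col in tmp.columns]
tmp.head()
###Output
_____no_output_____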
###Markdown
Practical example: splitting stocks into Small and Big by market capitalization
###Code
a_df = pd.read_csv("my_data/Small_and_Big.csv", index_col=[0])
a_df.head()
a_df.tail()
median_df = a_df.groupby(['date']).agg({'시가총액 (보통)(평균)(원)': 'median'})
median_df.head()
median_df.columns = ['시가총액_median']
median_df.head()
###Output
_____no_output_____ |
Pizza_for_Bob.ipynb | ###Markdown
Bob's Pizza Probability The use of total expectation law in a discrete case*Every day Bob goes to the pizza shop, orders a slice of pizza, and picks a topping—pepper, pepperoni, pineapple, prawns, or prosciutto—uniformly at random. On the day that Bob first picks pineapple, find the expected number of prior days in which he picked pepperoni.***The analytic solution is provided in the notes**
###Code
import numpy as np
topping = ["pepper", "pepperoni", "pineapple", "prawns", "prosciutto"]
def pineapple_pizza():
global topping
pepperoni_days=0 #num of prior days Bob ordered pepperoni
current = np.random.choice(topping)
while (current!="pineapple"):
if current=="pepperoni":
pepperoni_days+=1
current = np.random.choice(topping)
return pepperoni_days
# Run it 1000 times and calculate the mean of pepperoni_days
dayslst = [pineapple_pizza() for i in range(10000)]
print("Estimated expectation is: {}".format(np.mean(dayslst)))
###Output
Estimated expectation is: 0.992
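###Markdown
 A short analytic cross-check of the estimate above: the number of prior days $N$ before the first pineapple is geometric with success probability $1/5$, so $E[N] = 4$; each of those prior days is pepperoni with probability $1/4$ (uniform over the four non-pineapple toppings), so by the law of total expectation the expected count is $E[N] \cdot \tfrac{1}{4} = 1$, consistent with the simulated value of roughly 0.99.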
|
Andrew_Lowe_1_1_3.ipynb | ###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.Try and isolate the main relationships and then communicate them using crosstabs and graphs. Share any cool graphs that you make with the rest of the class in Slack!
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
!pip install pandas==0.23.4
df = pd.read_csv('https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')
df.head()
exercise_time_bins = pd.cut(df['exercise_time'], 3)
weight_bins = pd.cut(df['weight'], 3)
#Why does the first exercise bin start at -.3?
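# (Answer: when bins is an integer, pd.cut widens the lowest edge by 0.1% of the
#  value range so that the minimum observation is included; a first edge near -0.3
#  therefore implies the exercise_time range is roughly 300 minutes.)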
age_bins = pd.cut(df['age'], 3)
pd.crosstab(age_bins, exercise_time_bins, normalize='columns')
crosstab = pd.crosstab([age_bins, exercise_time_bins], weight_bins, normalize='columns')
crosstab.plot(kind='bar') #This graph is insanely hectic. But what I learned from it was that
#weight is correlated with amount of time exercised for each age bracket.
#So no matter the age, people tend to be in the lower weight bracket when they
#exercise more!
###Output
_____no_output_____
###Markdown
Assignment questionsAfter you've worked on some code, answer the following questions in this text block:1. What are the variable types in the data?2. What are the relationships between the variables?3. Which relationships are "real", and which spurious? 1. The variable types for age, weight and time spent exercizing are all numbers--integers, to be more precise.2. I see the strongest correlation between weight and exercise time. Age doesn't seem to correlate with weight very much. 3. The relationship between weight and exercise time seems to be "real". I'm not sure I see another relationship that's spurious, maybe if I switch the variables around and try and use different combinations to predict different variables I will find some. Stretch goals and resourcesFollowing are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub.- [Spurious Correlations](http://tylervigen.com/spurious-correlations)- [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/)Stretch goals:- Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it)- Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn)
###Code
###Output
_____no_output_____ |
19 Geopandas/Geodatenhandling 2.ipynb | ###Markdown
Geodatenhandling 2**Contents:** Geopandas for advanced users**Required skills**- Basic pandas skills- Functions and pandas- First steps with Geopandas- Geodata handling 1**Learning goals**- Points, lines, polygons revisited- Properties of geometric shapes- Modifying and combining shapes- Modifying and selecting geodata The example: grocery stores in Chicago. We check: which parts of the city have no grocery stores, i.e. where are the "food deserts"?- `Boundaries - Census Tracts - 2010.zip`, census tracts in Chicago from [here](https://data.cityofchicago.org/Facilities-Geographic-Boundaries/Boundaries-Census-Tracts-2010/5jrd-6zik)- `Grocery_Stores_-_2013.csv`, grocery stores in Chicago from [here](https://data.cityofchicago.org/Community-Economic-Development/Grocery-Stores-2013/53t8-wyrc)**Credits to:**- http://www.jonathansoma.com/lede/foundations-2017/ Setup
###Code
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point, LineString, Polygon
%matplotlib inline
###Output
_____no_output_____
###Markdown
Geometries. To warm up, a few more shapes from scratch. Point (again)
###Code
punkt1 = Point(5, 5)
punkt1
###Output
_____no_output_____
###Markdown
Line (again)
###Code
linie1 = LineString([Point(20, 0), Point(0, 20)])
linie1
linie2 = LineString([Point(15, 0), Point(0, 15)])
linie3 = LineString([Point(25, 0), Point(0, 25)])
###Output
_____no_output_____
###Markdown
Polygon (again)
###Code
polygon1 = Polygon([[0, 0], [10, 0], [10, 10], [0, 10]])
polygon1
###Output
_____no_output_____
###Markdown
**Let's plot it together!**
###Code
df = pd.DataFrame({'geometry': [punkt1, linie1, linie2, linie3, polygon1]})
gdf = gpd.GeoDataFrame(df, geometry='geometry')
gdf
gdf.plot(alpha=0.5, linewidth=2, edgecolor='black', markersize=5)
###Output
_____no_output_____
###Markdown
Comparing shapes. We can "compare" geometric shapes with one another in several ways. * **contains:** has the other object TOTALLY INSIDE (boundaries can't touch!!!) "a neighborhood CONTAINS restaurants"* **intersects:** is OVERLAPPING at ALL, unless it's just boundaries touching* **touches:** only the boundaries touch, like a tangent* **within:** is TOTALLY INSIDE of the other object "a restaurant is WITHIN a neighborhood"* **disjoint:** no touching!!! no intersecting!!!!* **crosses:** goes through but isn't inside - "a river crossing through a city"Reference and further predicates: http://geopandas.org/reference.html It works quite simply:
###Code
polygon1.contains(punkt1)
punkt1.contains(polygon1)
###Output
_____no_output_____
###Markdown
**Quiz questions:**
###Code
#Does point 1 lie inside polygon 1?
punkt1.within(polygon1)
#Does line 1 touch polygon 1?
linie1.touches(polygon1)
#Does line 3 intersect polygon 1?
linie3.intersects(polygon1)
#Does line 2 intersect polygon 1?
linie2.intersects(polygon1)
#Is polygon 1 completely disjoint from line 3?
polygon1.disjoint(linie3)
###Output
_____no_output_____
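###Markdown
 The list of predicates above also mentions `crosses`, which the quiz does not cover. As a small extra example: line 2 passes through the interior of polygon 1 and continues beyond it, so `crosses` should return True here.
###Code
#Extra example (not part of the quiz): does line 2 cross polygon 1? Expected: True
linie2.crosses(polygon1)
###Output
_____no_output_____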
###Markdown
Import. And now to our example: **A city map of Chicago with its neighborhoods (census tracts)**
###Code
tracts = gpd.read_file("dataprojects/Food Deserts/Boundaries - Census Tracts - 2010/geo_export_085dcd7b-113c-4a6d-8d43-5926de1dcc5b.shp")
tracts.head(2)
tracts.plot()
###Output
_____no_output_____
###Markdown
**A list of all grocery stores**
###Code
df = pd.read_csv("dataprojects/Food Deserts/Grocery_Stores_-_2013.csv")
df.head(2)
###Output
_____no_output_____
###Markdown
To get from pandas to Geopandas:- create the geometry- create the GeoDataFrame- initialize the coordinate reference system
###Code
points = df.apply(lambda row: Point(row.LONGITUDE, row.LATITUDE), axis=1)
grocery_stores = gpd.GeoDataFrame(df, geometry=points)
grocery_stores.crs = {'init': 'epsg:4326'}
grocery_stores.plot()
###Output
_____no_output_____
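###Markdown
 A sketch of an equivalent, slightly more compact way to build the point geometry, assuming a GeoPandas version that provides `points_from_xy` (0.5 or newer):
###Code
# sketch: alternative construction with gpd.points_from_xy (GeoPandas >= 0.5)
grocery_stores_alt = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df['LONGITUDE'], df['LATITUDE']), crs={'init': 'epsg:4326'}
)
grocery_stores_alt.head(2)
###Output
_____no_output_____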
###Markdown
**Let's plot everything together**
###Code
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.plot(ax=ax, color='red', markersize=8, alpha = 0.8)
###Output
_____no_output_____
###Markdown
Analysis. We want to know: where are the areas that have no grocery store within a certain radius in meters? To answer that, we first have to switch to a usable coordinate system that is based on meters. Changing the projection. We choose a variant of the Mercator projection. That is convenient because:- "The most important property of the Mercator projection is that it is conformal (angle-preserving). This also means that over small areas the length scale is the same in all directions." https://de.wikipedia.org/wiki/Mercator-Projektion- The coordinates are given in meters rather than in longitude/latitude (the Swiss CH coordinates are also a variant of the Mercator projection)
###Code
grocery_stores = grocery_stores.to_crs({'proj': 'merc'}) # merc = measured in meters
tracts = tracts.to_crs({'proj': 'merc'})
###Output
_____no_output_____
###Markdown
Other projections would be:- 'tmerc': transverse mercator- 'aea': albers equal area **We now have a new coordinate system**
###Code
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.plot(ax=ax, color='red', markersize=8, alpha = 0.8)
###Output
_____no_output_____
###Markdown
Creating a buffer. What does the map look like if we draw a circle of 500 meters around each grocery store?
###Code
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.buffer(500).plot(ax=ax, color='red', markersize=8, alpha=0.4)
###Output
_____no_output_____
###Markdown
Union. Next step: we merge all the buffered points into a single area
###Code
near_area = grocery_stores.buffer(500).unary_union
###Output
_____no_output_____
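###Markdown
 A quick way to sanity-check the merged geometry (an extra inspection, not required for the analysis): `unary_union` returns a single shapely geometry, usually a MultiPolygon here, whose total area in the projection's units can be read off directly.
###Code
# extra inspection of the merged buffer geometry
print(type(near_area))
print(round(near_area.area))   # total area in the (metric) units of the projection
###Output
_____no_output_____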
###Markdown
Now we can test whether the individual tracts touch this area
###Code
tracts.disjoint(near_area)
tracts[tracts.disjoint(near_area)].plot()
###Output
_____no_output_____
###Markdown
Plot. We plot the same map as before - and, in addition, the tracts that do not touch the buffered area
###Code
#Map as before
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.buffer(500).plot(ax=ax, color='red', markersize=8, alpha=0.4)
#New: the desert tracts
tracts[tracts.disjoint(near_area)].plot(ax=ax, color='darkblue', alpha=0.4)
ax.set_title('City tracts that have no grocery store within 500m distance')
###Output
_____no_output_____
###Markdown
Geodatenhandling 2**Contents:** Geopandas for advanced users**Required skills**- Basic pandas skills- Functions and pandas- First steps with Geopandas- Geodata handling 1**Learning goals**- Points, lines, polygons revisited- Properties of geometric shapes- Modifying and combining shapes- Modifying and selecting geodata The example: grocery stores in Chicago. We check: which parts of the city have no grocery stores, i.e. where are the "food deserts"?- `Boundaries - Census Tracts - 2010.zip`, census tracts in Chicago from [here](https://data.cityofchicago.org/Facilities-Geographic-Boundaries/Boundaries-Census-Tracts-2010/5jrd-6zik)- `Grocery_Stores_-_2013.csv`, grocery stores in Chicago from [here](https://data.cityofchicago.org/Community-Economic-Development/Grocery-Stores-2013/53t8-wyrc)**Credits to:**- http://www.jonathansoma.com/lede/foundations-2017/ Setup
###Code
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point, LineString, Polygon
%matplotlib inline
###Output
_____no_output_____
###Markdown
Geometries. To warm up, a few more shapes from scratch. Point: create a point at the coordinate (5, 5). Line: draw- a line through the points (20, 0) and (0, 20)- a line through the points (15, 0) and (0, 15)- a line through the points (25, 0) and (0, 25)
###Code
linie1
###Output
_____no_output_____
###Markdown
Polygon: draw a polygon with the corner points (0, 0), (10, 0), (10, 10), (0, 10):
###Code
polygon1
###Output
_____no_output_____
###Markdown
Plotting: create a DataFrame with a "geometry" column that contains the points, lines and the polygon. Convert the DataFrame into a GeoDataFrame (define the geometry column!)
###Code
gdf
###Output
_____no_output_____
###Markdown
If the GeoDataFrame was created correctly, we can plot it:
###Code
gdf.plot(alpha=0.5, linewidth=2, edgecolor='black', markersize=5)
###Output
_____no_output_____
###Markdown
Comparing shapes. We can "compare" geometric shapes with one another in several ways. * **contains:** has the other object TOTALLY INSIDE (boundaries can't touch!!!) "a neighborhood CONTAINS restaurants"* **intersects:** is OVERLAPPING at ALL, unless it's just boundaries touching* **touches:** only the boundaries touch, like a tangent* **within:** is TOTALLY INSIDE of the other object "a restaurant is WITHIN a neighborhood"* **disjoint:** no touching!!! no intersecting!!!!* **crosses:** goes through but isn't inside - "a river crossing through a city"Reference and further predicates: http://geopandas.org/reference.html It works quite simply:
###Code
polygon1.contains(punkt1)
punkt1.contains(polygon1)
###Output
_____no_output_____
###Markdown
**Quiz questions:**
###Code
#Does point 1 lie inside polygon 1?
#Does line 1 touch polygon 1?
#Does line 3 intersect polygon 1?
#Does line 2 intersect polygon 1?
#Is polygon 1 completely disjoint from line 3?
###Output
_____no_output_____
###Markdown
Import. And now to our actual example: **A city map of Chicago with its neighborhoods (census tracts)**. It already exists as a shapefile! We can read it directly with Geopandas.
###Code
tracts = gpd.read_file("dataprojects/Food Deserts/Boundaries - Census Tracts - 2010/geo_export_085dcd7b-113c-4a6d-8d43-5926de1dcc5b.shp")
tracts.head(2)
tracts.plot()
###Output
_____no_output_____
###Markdown
**A list of all grocery stores**. This one only exists as a CSV file so far, so we have to read it with pandas:
###Code
df = pd.read_csv("dataprojects/Food Deserts/Grocery_Stores_-_2013.csv")
df.head(2)
###Output
_____no_output_____
###Markdown
To get from pandas to Geopandas:- create the geometry- create the GeoDataFrame- initialize the coordinate reference system
###Code
points = df.apply(lambda row: Point(row['LONGITUDE'], row['LATITUDE']), axis=1)
grocery_stores = gpd.GeoDataFrame(df, geometry=points)
grocery_stores.crs = {'init': 'epsg:4326'}
grocery_stores.plot()
###Output
_____no_output_____
###Markdown
**Let's plot everything together**
###Code
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.plot(ax=ax, color='red', markersize=8, alpha = 0.8)
###Output
_____no_output_____
###Markdown
Analysis. We want to know: where are the areas that have no grocery store within a certain radius in meters? To answer that, we first have to switch to a usable coordinate system that is based on meters. Changing the projection. We choose a variant of the Mercator projection. That is convenient because:- "The most important property of the Mercator projection is that it is conformal (angle-preserving). This also means that over small areas the length scale is the same in all directions." https://de.wikipedia.org/wiki/Mercator-Projektion- The coordinates are given in meters rather than in longitude/latitude (the Swiss CH coordinates are also a variant of the Mercator projection)
###Code
grocery_stores = grocery_stores.to_crs({'proj': 'merc'})
tracts = tracts.to_crs({'proj': 'merc'})
###Output
_____no_output_____
###Markdown
Other projections would be:- 'tmerc': transverse mercator- 'aea': albers equal area **We now have a new coordinate system**
###Code
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.plot(ax=ax, color='red', markersize=8, alpha = 0.8)
###Output
_____no_output_____
###Markdown
Creating a buffer. What does the map look like if we draw a circle of 500 meters around each grocery store?
###Code
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.buffer(500).plot(ax=ax, color='red', markersize=8, alpha=0.4)
###Output
_____no_output_____
###Markdown
Union. Next step: we merge all the buffered points into a single area
###Code
near_area = grocery_stores.buffer(500).unary_union
###Output
_____no_output_____
###Markdown
Now we can test whether the individual tracts touch this area
###Code
tracts.disjoint(near_area)
tracts[tracts.disjoint(near_area)].plot()
###Output
_____no_output_____
###Markdown
Plot. We plot the same map as before - and, in addition, the tracts that do not touch the buffered area
###Code
#Map as before
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.buffer(500).plot(ax=ax, color='red', markersize=8, alpha=0.4)
#New: desert tracts
tracts[tracts.disjoint(near_area)].plot(ax=ax, color='darkblue', alpha=0.4)
ax.set_title('City tracts that have no grocery store within 500m distance')
###Output
_____no_output_____
###Markdown
Geodata handling 2**Content:** Geopandas for advanced users**Required skills**- Basic pandas skills- Functions and pandas- First steps with Geopandas- Geodata handling 1**Learning goals**- Points, lines, polygons revisited- Properties of geometric shapes- Modifying and combining shapes- Modifying and selecting geodata The example: Stores in Chicago. We check: in which districts are there no grocery stores, where are the "food deserts"- `Boundaries - Census Tracts - 2010.zip`, census tracts in Chicago from [here](https://data.cityofchicago.org/Facilities-Geographic-Boundaries/Boundaries-Census-Tracts-2010/5jrd-6zik)- `Grocery_Stores_-_2013.csv`, grocery stores in Chicago from [here](https://data.cityofchicago.org/Community-Economic-Development/Grocery-Stores-2013/53t8-wyrc)**Credits to:**- http://www.jonathansoma.com/lede/foundations-2017/ Setup
###Code
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point, LineString, Polygon
%matplotlib inline
###Output
_____no_output_____
###Markdown
Geometries: To warm up, a few shapes from scratch again. Point (again)
###Code
punkt1 = Point(5, 5)
punkt1
###Output
_____no_output_____
###Markdown
Line (again)
###Code
linie1 = LineString([Point(20, 0), Point(0, 20)])
linie1
linie2 = LineString([Point(15, 0), Point(0, 15)])
linie3 = LineString([Point(25, 0), Point(0, 25)])
###Output
_____no_output_____
###Markdown
Polygon (again)
###Code
polygon1 = Polygon([[0, 0], [10, 0], [10, 10], [0, 10]])
polygon1
###Output
_____no_output_____
###Markdown
**Let's plot it together!**
###Code
df = pd.DataFrame({'geometry': [punkt1, linie1, linie2, linie3, polygon1]})
gdf = gpd.GeoDataFrame(df, geometry='geometry')
gdf
gdf.plot(alpha=0.5, linewidth=2, edgecolor='black', markersize=5)
###Output
_____no_output_____
###Markdown
Comparing shapes: We can "compare" geometric shapes with one another in several ways. * **contains:** has the other object TOTALLY INSIDE (boundaries can't touch!!!) "a neighborhood CONTAINS restaurants"* **intersects:** is OVERLAPPING at ALL, unless it's just boundaries touching* **touches:** only the boundaries touch, like a tangent* **within:** is TOTALLY INSIDE of the other object "a restaurant is WITHIN a neighborhood"* **disjoint:** no touching!!! no intersecting!!!!* **crosses:** goes through but isn't inside - "a river crossing through a city"Reference and further comparisons: http://geopandas.org/reference.html This works quite simply:
###Code
polygon1.contains(punkt1)
punkt1.contains(polygon1)
###Output
_____no_output_____
###Markdown
**Quiz questions:**
###Code
#Is point 1 within polygon 1?
punkt1.within(polygon1)
#Does line 1 touch polygon 1?
linie1.touches(polygon1)
#Does line 3 intersect polygon 1?
linie3.intersects(polygon1)
#Does line 2 intersect polygon 1?
linie2.intersects(polygon1)
#Is polygon 1 completely disjoint from line 3?
polygon1.disjoint(linie3)
###Output
_____no_output_____
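###Markdown
The list above also mentions *crosses*, which the quiz does not exercise. A minimal sketch using only the shapes already defined in this notebook (no new objects assumed):
###Code
# linie2 runs through the interior of polygon1 but is not contained in it
print(linie2.crosses(polygon1))   # expected: True
print(linie2.within(polygon1))    # expected: False, parts of the line lie outside the polygon
###Output
_____no_output_____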
###Markdown
Import: And now to our example: **a city map of Chicago with its neighborhoods (census tracts)**
###Code
tracts = gpd.read_file("dataprojects/Food Deserts/Boundaries - Census Tracts - 2010/geo_export_085dcd7b-113c-4a6d-8d43-5926de1dcc5b.shp")
tracts.head(2)
tracts.plot()
###Output
_____no_output_____
###Markdown
**A list of all grocery stores**
###Code
df = pd.read_csv("dataprojects/Food Deserts/Grocery_Stores_-_2013.csv")
df.head(2)
###Output
_____no_output_____
###Markdown
To get from Pandas to Geopandas:- create the geometry- create the GeoDataFrame- initialize the coordinate system
###Code
points = df.apply(lambda row: Point(row.LONGITUDE, row.LATITUDE), axis=1)
grocery_stores = gpd.GeoDataFrame(df, geometry=points)
grocery_stores.crs = {'init': 'epsg:4326'}
grocery_stores.plot()
###Output
_____no_output_____
###Markdown
**Let's plot everything together**
###Code
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.plot(ax=ax, color='red', markersize=8, alpha = 0.8)
###Output
_____no_output_____
###Markdown
Analysis: We want to know where the areas are that have no grocery store within a certain radius in meters. To answer that, we first have to switch to a usable coordinate system that is based on meters. Changing the projection: We choose a variant of the Mercator projection. This is convenient because:- "The most important property of the Mercator projection is its conformality. This also means that in small areas the length scale is the same in all directions." https://de.wikipedia.org/wiki/Mercator-Projektion- The coordinates are given in meters rather than longitude/latitude (the Swiss CH coordinates are also a variant of the Mercator projection)
###Code
grocery_stores = grocery_stores.to_crs({'proj': 'merc'})
tracts = tracts.to_crs({'proj': 'merc'})
###Output
_____no_output_____
###Markdown
Other projections would be:- 'tmerc': transverse mercator- 'aea': albers equal area **We now have a new coordinate system**
###Code
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.plot(ax=ax, color='red', markersize=8, alpha = 0.8)
###Output
_____no_output_____
###Markdown
Creating a buffer: What does the map look like if we draw a circle of 500 meters around every grocery store?
###Code
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.buffer(500).plot(ax=ax, color='red', markersize=8, alpha=0.4)
###Output
_____no_output_____
###Markdown
Union: Next step: we merge all the points into a single area
###Code
near_area = grocery_stores.buffer(500).unary_union
###Output
_____no_output_____
###Markdown
Now we can test whether the individual tracts touch this area
###Code
tracts.disjoint(near_area)
tracts[tracts.disjoint(near_area)].plot()
###Output
_____no_output_____
###Markdown
Plot: We plot the same map as before - and additionally those tracts that do not touch the buffered point area
###Code
#Map from before
ax = tracts.plot(figsize=(15,15), color='lightgrey', linewidth=0.25, edgecolor='white')
grocery_stores.buffer(500).plot(ax=ax, color='red', markersize=8, alpha=0.4)
#New: desert tracts
tracts[tracts.disjoint(near_area)].plot(ax=ax, color='darkblue', alpha=0.4)
ax.set_title('City tracts that have no grocery store within 500m distance')
###Output
_____no_output_____ |
SnapshotScrapper/SnapshotNetwork.ipynb | ###Markdown
Manipulating the network
###Code
# This is how we can get all the different spaces, proposals and users in the network
spaces = [n for n in G.nodes() if G.nodes[n]['node_type'] == "space"]
proposals = [n for n in G.nodes() if G.nodes[n]['node_type'] == "proposal"]
voters = [n for n in G.nodes() if G.nodes[n]['node_type'] == "user"]
# Here we can sort the list of spaces by the number of proposals, to do this we use the degree of the node
degs = nx.degree(G)
space_deg = [(space, degs[space]) for space in spaces]
space_deg = sorted(space_deg, key=lambda k: k[1], reverse=True)
len([v for v in voters if degs[v] <= 2]) / len(voters)
# Now we can also get all the proposals for a space as well as the users for a space and a proposal
space = space_deg[35][0]
space_childrens = list(nx.ancestors(G, space))
space_proposals = [n for n in space_childrens if G.nodes[n]['node_type'] == "proposal"]
space_voters = [n for n in space_childrens if G.nodes[n]['node_type'] == "user"]
proposal_voters = nx.ancestors(G, space_proposals[0])
print("Space:", space, "Proposals:", len(space_proposals), "Voters:", len(space_voters))
###Output
Space: snapshot.dcl.eth Proposals: 398 Voters: 1429
###Markdown
Projecting the Space network connected by voters
###Code
len(voters)
###Output
_____no_output_____
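###Markdown
The heading above promises a projection of the space network, but the cell only counts the voters. A hedged sketch of one way to build it, assuming voters are connected directly to space nodes in `G` (which the degree-based checks in this notebook suggest); on the full graph this projection can take a while:
###Code
# spaces become the nodes; two spaces share an edge when at least one voter is active in both
voter_space_G = nx.Graph(nx.subgraph(G, voters + spaces))
spacesG = nx.algorithms.bipartite.weighted_projected_graph(voter_space_G, spaces)
print(spacesG.number_of_nodes(), "spaces,", spacesG.number_of_edges(), "shared-voter links")
###Output
_____no_output_____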
###Markdown
Projecting the voter network of a DAO
###Code
subG = nx.subgraph(G, space_voters+space_proposals)
usersG = nx.algorithms.bipartite.projected_graph(nx.Graph(subG), space_voters)
users_deg = [(user, degs[user]) for user in usersG]
users_filter = [el[0] for el in users_deg if el[1] > 2]
print("Participated only once:", len(usersG) - len(users_filter), "remaining after filter:", len(users_filter))
usersG = nx.subgraph(usersG, users_filter)
%time partition = community_louvain.best_partition(usersG)
cmap = cm.get_cmap('viridis', max(partition.values()) + 1)
pos=nx.spring_layout(usersG, k=1)
size = [degs[user]*10 for user in usersG]
plt.figure(figsize=(10,10))
nx.draw_networkx_nodes(usersG, pos=pos, node_size=size, cmap=cmap, node_color=list(partition.values()))
nx.draw_networkx_edges(usersG, pos=pos, node_size=size, alpha=0.2)
###Output
_____no_output_____
###Markdown
Question: What is the proportion of voters with "degree=1" (aka vote only once) per space
###Code
sns.set()
filterout = []
for space in spaces:
if degs[space] == 1:
filterout.append(space)
degs = nx.degree(G)
result = []
for space in tqdm.tqdm(spaces):
space_childrens = list(nx.ancestors(G, space))
space_voters = [n for n in space_childrens if G.nodes[n]['node_type'] == "user"]
space_proposals = [n for n in space_childrens if G.nodes[n]['node_type'] == "proposal"]
if len(space_voters) > 0:
onlyonce = [user for user in space_voters if degs[user] == 2]
result.append([space, len(onlyonce), len(space_voters), 1 - (len(space_voters) - len(onlyonce))/len(space_voters), len(space_proposals)])
result = pd.DataFrame(result, columns=["Space", "Single voters", "Voters", "Proportion", "Proposals count"])
plt.figure(figsize=(5,5))
sns.distplot(result['Proportion'], color="black")
plt.title("Proportion of voters voting multiple time\naccross all SnapShot spaces")
plt.xlabel("Proportion of Active Voters")
plt.xlim(0,1)
plt.figure(figsize=(5,5))
sns.scatterplot(data=result, x="Proportion", y="Proposals count", color="black", alpha=0.3)
plt.yscale('log')
plt.title("Proportion of single voters compared\nto proposal counts in a space")
plt.xlabel("Proportion of Active Voters")
result['ProposalNormed'] = result['Proposals count']/result['Voters']
plt.figure(figsize=(5,5))
sns.scatterplot(data=result, x="Proportion", y="ProposalNormed", color="black", alpha=0.3)
plt.yscale('log')
plt.title("Proportion of single voters compared to Normed Proposals \n (proposal counts/num voters) in a space")
plt.xlabel("Proportion of Active Voters")
plt.ylabel("Space Proposal/Members")
###Output
_____no_output_____
###Markdown
With unique proposal spaces filtered out
###Code
degs = nx.degree(G)
result = []
for space in tqdm.tqdm([s for s in spaces if s not in filterout]):
space_childrens = list(nx.ancestors(G, space))
space_voters = [n for n in space_childrens if G.nodes[n]['node_type'] == "user"]
space_proposals = [n for n in space_childrens if G.nodes[n]['node_type'] == "proposal"]
if len(space_voters) > 0:
onlyonce = [user for user in space_voters if degs[user] == 2]
result.append([space, len(onlyonce), len(space_voters), 1 - (len(space_voters) - len(onlyonce))/len(space_voters), len(space_proposals)])
result = pd.DataFrame(result, columns=["Space", "Single voters", "Voters", "Proportion", "Proposals count"])
plt.figure(figsize=(5,5))
sns.distplot(result['Proportion'], color="black")
plt.title("Proportion of voters voting multiple time\naccross all SnapShot spaces")
plt.xlabel("Proportion of Active Voters")
plt.xlim(0,1)
plt.figure(figsize=(5,5))
sns.scatterplot(data=result, x="Proportion", y="Proposals count", color="black", alpha=0.3)
plt.yscale('log')
plt.title("Proportion of single voters compared\nto proposal counts in a space")
plt.xlabel("Proportion of Active Voters")
result['ProposalNormed'] = result['Proposals count']/result['Voters']
plt.figure(figsize=(5,5))
sns.scatterplot(data=result, x="Proportion", y="ProposalNormed", color="black", alpha=0.3)
plt.yscale('log')
plt.title("Proportion of single voters compared to Normed Proposals \n (proposal counts/num voters) in a space")
plt.xlabel("Proportion of Active Voters")
plt.ylabel("Space Proposal/Members")
###Output
_____no_output_____
###Markdown
Some more "classical" datascience use of the data
###Code
space_vote_dist = {}
space_voters = {}
for space in spaces:
proposals = os.listdir(os.path.join(outpath, 'spaces', space['id']))
for proposal in proposals:
with open(os.path.join(outpath, 'spaces', space['id'], proposal)) as f:
data = json.load(f)
votes = data["votes_data"]
del data['votes_data']
for vote in votes:
if vote['space']['id'] not in space_vote_dist:
space_vote_dist[vote['space']['id']] = 0
space_vote_dist[vote['space']['id']] += 1
if vote['space']['id'] not in space_voters:
space_voters[vote['space']['id']] = {}
if not vote['voter'] in space_voters[vote['space']['id']]:
space_voters[vote['space']['id']][vote['voter']] = True
data = []
data2 = []
for k in space_vote_dist:
data.append((k, space_vote_dist[k]))
data2.append((k, len(space_voters[k])))
tmp = sorted(data, key=lambda k: k[1], reverse=True)
X, Y = [], []
for el in tmp:
X.append(el[0])
Y.append(el[1])
plt.plot(range(len(X)), np.log10(Y), '.')
plt.ylabel('Log10(Number of Votes)')
plt.xlabel('Spaces')
plt.title("(Inequal) number of votes per space")
tmp = sorted(data2, key=lambda k: k[1], reverse=True)
X, Y = [], []
for el in tmp:
X.append(el[0])
Y.append(el[1])
plt.plot(range(len(X)), np.log10(Y), '.')
plt.ylabel('Log10(Number of unique voters)')
plt.xlabel('Spaces')
plt.title("(Inequal) number of voters per space")
###Output
_____no_output_____ |
Arctic Heat/ALAMO Analysis/ERDDAP_ALAMO_PAR.ipynb | ###Markdown
Preliminary PAR data was provided as a text file for ALAMO 9119. The following cell loads the dataset from an ERDDAP server. It can be accessed like an OPeNDAP/THREDDS server for netCDF, but it sends the data in a **streaming** format which is hard to parse directly. So instead, we download a temporary file by specifying the parameters in the URL. An alternative would be to access the streaming csv version (or another file type) from the ERDDAP server and process it via pandas.
###Code
# imports needed by this cell and by plot_PAR() below (an assumption: they were presumably loaded in an earlier cell of the original notebook)
import datetime
import urllib
import numpy as np
import pandas as pd
import xarray as xa
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from matplotlib.dates import DayLocator, DateFormatter
import cmocean

cmap = cmocean.cm.solar
temp_filename = "data/9119.ptsPar.txt"
ALAMOID = "http://ferret.pmel.noaa.gov/alamo/erddap/tabledap/arctic_heat_alamo_profiles_9119"
# the two assignments above are superseded here; the PAR text file is read with pandas further down
cmap = cmocean.cm.thermal
temp_filename = "data/tmp.nc"
start_date="2017-09-16"
end_date ="2017-12-12"
urllib.urlretrieve(ALAMOID+".ncCFMA?profileid%2CFLOAT_SERIAL_NO%2CCYCLE_NUMBER%2CREFERENCE_DATE_TIME%2CJULD%2Ctime%2Clatitude%2Clongitude%2CPRES%2CTEMP%2CPSAL&time%3E="+start_date+"T23%3A52%3A00Z",temp_filename)
start_date_dt = datetime.datetime.strptime(start_date,"%Y-%m-%d")  # trailing comma removed: it turned this into a tuple and broke set_xlim below
end_date_dt = datetime.datetime.strptime(end_date,"%Y-%m-%d")
#datanc = nc.Dataset('data/tmp.nc') #using netcdf library
datapd = pd.read_csv('data/9119.ptsPar.txt',sep='\s+',names=['id','profile','pressure','temperature','salinity','PAR'])
dataxa = xa.open_dataset('data/tmp.nc')
def plot_PAR():
depth_array = np.arange(0,55,0.25)
temparray = np.ones((dataxa.dims['profile'],len(depth_array)))*np.nan
ProfileTime = []
cycle_col = 0
plt.figure(1, figsize=(18, 3), facecolor='w', edgecolor='w')
plt.subplot(1,1,1)
ax1=plt.gca()
for cycle in range(dataxa['profile'].min(),dataxa['profile'].max()+1,1):
temp_time = dataxa.time[cycle].data[~np.isnat(dataxa.time[cycle].data)]
ProfileTime = ProfileTime + [temp_time]
#remove where pressure may be unknown
Pressure = dataxa.PRES[cycle].data[~np.isnan(dataxa.PRES[cycle].data)]
try:
Temperature = datapd.groupby('profile').get_group(int(dataxa.CYCLE_NUMBER[cycle][0].values)).PAR
except:
Temperature = Pressure * 0 + np.nan
temparray[cycle_col,:] = np.interp(depth_array,np.flip(Pressure,axis=0),np.flip(Temperature,axis=0),
left=np.nan,right=np.nan)
cycle_col +=1
###plot black dots at sample points
#plt.scatter(x=temp_time, y=Pressure,s=1,marker='.', edgecolors='none', c='k', zorder=3, alpha=1)
###plot colored dots at sample points with colorscheme based on variable value
plt.scatter(x=temp_time, y=Pressure,s=30,marker='.', edgecolors='none', c=np.log(Temperature),
vmin=0, vmax=10, cmap=cmocean.cm.solar, zorder=2)
time_array = np.array([x[0] for x in ProfileTime])
plt.contourf(time_array,depth_array,np.log(temparray.T),extend='both',
cmap=cmocean.cm.solar,levels=np.arange(0,10,1),alpha=0.9,zorder=1)
cbar = plt.colorbar()
#plt.contour(time_array,depth_array,temparray.T, colors='#d3d3d3',linewidths=1, alpha=1.0,zorder=3)
ax1.invert_yaxis()
ax1.yaxis.set_minor_locator(ticker.MultipleLocator(5))
ax1.xaxis.set_major_locator(DayLocator(bymonthday=range(0,31,5)))
ax1.xaxis.set_minor_locator(DayLocator(bymonthday=range(1,32,1)))
ax1.xaxis.set_minor_formatter(DateFormatter(''))
ax1.xaxis.set_major_formatter(DateFormatter('%d'))
ax1.xaxis.set_tick_params(which='major', pad=25)
ax1.xaxis.set_tick_params(which='minor', pad=5)
ax1.set_xlim([start_date_dt,end_date_dt])
ax1.set_ylim([50,0])
plot_PAR()
###Output
/Users/bell/anaconda/envs/jupyter/lib/python2.7/site-packages/ipykernel_launcher.py:27: RuntimeWarning: invalid value encountered in log
/Users/bell/anaconda/envs/jupyter/lib/python2.7/site-packages/ipykernel_launcher.py:31: RuntimeWarning: invalid value encountered in log
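###Markdown
As mentioned at the top, an alternative to downloading a temporary netCDF file is to request the same ERDDAP query as streaming csv and hand it straight to pandas. A minimal sketch reusing the dataset id and variable list from above (the exact column names in the response are an assumption until checked against the server):
###Code
csv_url = (ALAMOID + ".csv?profileid%2CFLOAT_SERIAL_NO%2CCYCLE_NUMBER%2CJULD%2Ctime"
           "%2Clatitude%2Clongitude%2CPRES%2CTEMP%2CPSAL&time%3E=" + start_date + "T23%3A52%3A00Z")
# ERDDAP csv responses carry a second header row with units, hence skiprows=[1]
alamo_csv = pd.read_csv(csv_url, skiprows=[1])
alamo_csv.head()
###Output
_____no_output_____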
|
notebooks/losses_evaluation/Dstripes/basic/ell/dense/VAE/DstripesVAE_Dense_reconst_1ell_1sharpdiff.ipynb | ###Markdown
Settings
###Code
%load_ext autoreload
%autoreload 2
%env TF_KERAS = 1
import os
sep_local = os.path.sep
import sys
sys.path.append('..'+sep_local+'..')
print(sep_local)
os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..')
print(os.getcwd())
import tensorflow as tf
print(tf.__version__)
###Output
2.1.0
###Markdown
Dataset loading
###Code
dataset_name='Dstripes'
images_dir = 'C:\\Users\\Khalid\\Documents\\projects\\Dstripes\\DS06\\'  # backslashes escaped consistently
validation_percentage = 20
valid_format = 'png'
from training.generators.file_image_generator import create_image_lists, get_generators
imgs_list = create_image_lists(
image_dir=images_dir,
validation_pct=validation_percentage,
valid_imgae_formats=valid_format
)
inputs_shape= image_size=(200, 200, 3)
batch_size = 32//2
latents_dim = 32
intermediate_dim = 50
training_generator, testing_generator = get_generators(
images_list=imgs_list,
image_dir=images_dir,
image_size=image_size,
batch_size=batch_size,
class_mode=None
)
import tensorflow as tf
train_ds = tf.data.Dataset.from_generator(
lambda: training_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
test_ds = tf.data.Dataset.from_generator(
lambda: testing_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
_instance_scale=1.0
for data in train_ds:
_instance_scale = float(data[0].numpy().max())
break
_instance_scale
import numpy as np
from collections.abc import Iterable
if isinstance(inputs_shape, Iterable):
_outputs_shape = np.prod(inputs_shape)
_outputs_shape
###Output
_____no_output_____
###Markdown
Model's Layers definition
###Code
menc_lays = [tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),
tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=latents_dim)]
venc_lays = [tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),
tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=latents_dim)]
dec_lays = [tf.keras.layers.Dense(units=latents_dim, activation='relu'),
tf.keras.layers.Dense(units=intermediate_dim, activation='relu'),
tf.keras.layers.Dense(units=_outputs_shape),
tf.keras.layers.Reshape(inputs_shape)]
###Output
_____no_output_____
###Markdown
Model definition
###Code
model_name = dataset_name+'VAE_Dense_reconst_1ell_1sharpdiff'
experiments_dir='experiments'+sep_local+model_name
from training.autoencoding_basic.autoencoders.VAE import VAE as AE
inputs_shape=image_size
variables_params = \
[
{
'name': 'inference_mean',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': menc_lays
},
{
'name': 'inference_logvariance',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': venc_lays
},
{
'name': 'generative',
'inputs_shape':latents_dim,
'outputs_shape':inputs_shape,
'layers':dec_lays
}
]
from utils.data_and_files.file_utils import create_if_not_exist
_restore = os.path.join(experiments_dir, 'var_save_dir')
create_if_not_exist(_restore)
_restore
#to restore trained model, set filepath=_restore
ae = AE(
name=model_name,
latents_dim=latents_dim,
batch_size=batch_size,
variables_params=variables_params,
filepath=None
)
from evaluation.quantitive_metrics.sharp_difference import prepare_sharpdiff
from statistical.losses_utilities import similarity_to_distance
from statistical.ae_losses import expected_loglikelihood as ell
ae.compile(loss={'x_logits': lambda x_true, x_logits: ell(x_true, x_logits)+similarity_to_distance(prepare_sharpdiff([ae.batch_size]+ae.get_inputs_shape()))(x_true, x_logits)})
###Output
Model: "pokemonAE_Dense_reconst_1ell_1ssmi"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
inference_inputs (InputLayer [(None, 200, 200, 3)] 0
_________________________________________________________________
inference (Model) (None, 32) 40961344
_________________________________________________________________
generative (Model) (None, 200, 200, 3) 3962124
_________________________________________________________________
tf_op_layer_x_logits (Tensor [(None, 200, 200, 3)] 0
=================================================================
Total params: 44,923,468
Trainable params: 44,923,398
Non-trainable params: 70
_________________________________________________________________
None
###Markdown
Callbacks
###Code
from training.callbacks.sample_generation import SampleGeneration
from training.callbacks.save_model import ModelSaver
es = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=1e-12,
patience=12,
verbose=1,
restore_best_weights=False
)
ms = ModelSaver(filepath=_restore)
csv_dir = os.path.join(experiments_dir, 'csv_dir')
create_if_not_exist(csv_dir)
csv_dir = os.path.join(csv_dir, ae.name+'.csv')
csv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)
csv_dir
image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')
create_if_not_exist(image_gen_dir)
sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False)
###Output
_____no_output_____
###Markdown
Model Training
###Code
ae.fit(
x=train_ds,
input_kw=None,
steps_per_epoch=int(1e4),
epochs=int(1e6),
verbose=2,
callbacks=[es, ms, csv_log, sg],  # gts_mertics and gtu_mertics are never defined in this notebook, so they are dropped here
workers=-1,
use_multiprocessing=True,
validation_data=test_ds,
validation_steps=int(1e4)
)
###Output
_____no_output_____
###Markdown
Model Evaluation inception_score
###Code
from evaluation.generativity_metrics.inception_metrics import inception_score
is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200)
print(f'inception_score mean: {is_mean}, sigma: {is_sigma}')
###Output
_____no_output_____
###Markdown
Frechet_inception_distance
###Code
from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance
fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)
print(f'frechet inception distance: {fis_score}')
###Output
_____no_output_____
###Markdown
perceptual_path_length_score
###Code
from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score
ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)
print(f'perceptual path length score: {ppl_mean_score}')
###Output
_____no_output_____
###Markdown
precision score
###Code
from evaluation.generativity_metrics.precision_recall import precision_score
_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'precision score: {_precision_score}')
###Output
_____no_output_____
###Markdown
recall score
###Code
from evaluation.generativity_metrics.precision_recall import recall_score
_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'recall score: {_recall_score}')
###Output
_____no_output_____
###Markdown
Image Generation image reconstruction Training dataset
###Code
%load_ext autoreload
%autoreload 2
from training.generators.image_generation_testing import reconstruct_from_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
with Randomness
###Code
from training.generators.image_generation_testing import generate_images_like_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
Complete Randomness
###Code
from training.generators.image_generation_testing import generate_images_randomly
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'random_synthetic_dir')
create_if_not_exist(save_dir)
generate_images_randomly(ae, save_dir)
from training.generators.image_generation_testing import interpolate_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'interpolate_dir')
create_if_not_exist(save_dir)
interpolate_a_batch(ae, testing_generator, save_dir)
###Output
100%|██████████| 15/15 [00:00<00:00, 19.90it/s]
|
examples/tutorials/translations/português/Parte 03 - Ferramentas Avançadas para Execução Remota.ipynb | ###Markdown
Part 3: Advanced Tools for Remote Execution. In the last section we used the concept of Federated Learning to train a very simple model. We did this just by calling *.send()* and *.get()* on our model, i.e. sending it to the location of the training data, updating it, and then bringing it back. However, at the end of the example we realized that we would need to go further to protect people's privacy. That is, we want to average the gradients **before** calling *.get()*. That way, we will never see anyone's exact gradient (better protecting their privacy!!!) But to do this, we need a few more things:- use a pointer to send a Tensor directly to another Worker. Besides that, while we are here, we will learn about some more advanced tensor operations, which will help us both with this example and with some others in the future!Authors:- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)Translation:- Jeferson Silva - Twitter: [@jefersonnpn](https://twitter.com/jefersonnpn)
###Code
import torch
import syft as sy
hook = sy.TorchHook(torch)
###Output
_____no_output_____
###Markdown
Section 3.1 - From pointers to pointers. As you know, *PointerTensor* objects behave just like normal tensors. In fact, they are *so much like tensors* that we can even have pointers that point **to** other pointers. Check it out!
###Code
bob = sy.VirtualWorker(hook, id='bob')
alice = sy.VirtualWorker(hook, id='alice')
# this is a local tensor.
x = torch.tensor([1,2,3,4])
x
# sending the local tensor to Bob
x_ptr = x.send(bob)
# now we have a pointer
x_ptr
# now we can SEND THE POINTER to alice!!!
pointer_to_x_ptr = x_ptr.send(alice)
pointer_to_x_ptr
###Output
_____no_output_____
###Markdown
But what did we just do? Well, in the previous example we created a tensor called `x` and sent it to Bob, creating a pointer on our local machine (`x_ptr`). Then we called `x_ptr.send(alice)`, which **sends the pointer** to Alice. Note that this does NOT move the data! Instead, it moves the pointer, i.e. the pointer to the data!!
###Code
# As you can see above, Bob still holds the actual data (the data is always stored as a LocalTensor).
bob._objects
# Alice, on the other hand, has x_ptr!! (note the pointer to Bob)
alice._objects
# we can use .get() to retrieve x_ptr from Alice
x_ptr = pointer_to_x_ptr.get()
x_ptr
# and then we can use x_ptr to retrieve x from Bob!
x = x_ptr.get()
x
###Output
_____no_output_____
###Markdown
Arithmetic on Pointer -> Pointer -> Data Object. Just as with normal pointers, we can run any PyTorch operations on these tensors.
###Code
bob._objects
alice._objects
p2p2x = torch.tensor([1,2,3,4,5]).send(bob).send(alice)
y = p2p2x + p2p2x
bob._objects
alice._objects
y.get().get()
bob._objects
alice._objects
p2p2x.get().get()
bob._objects
alice._objects
###Output
_____no_output_____
###Markdown
Section 3.2 - Chained tensor operations. In the last section, whenever we used a *.send()* or *.get()* operation, we called these operations directly on the tensor on our local machine. However, if you have a chain of pointers, sometimes you want to call operations like *.get()* or *.send()* on the **last** pointer in the chain (such as sending data directly from one *worker* to another). To achieve this, you want to use functions specially designed for privacy-preserving operations. These operations are:- `my_pointer_to_another_pointer.move(another_worker)`
###Code
# x is now a pointer to the data that lives on Bob's machine
x = torch.tensor([1,2,3,4,5]).send(bob)
print(' bob:', bob._objects)
print('alice:', alice._objects)
x = x.move(alice)
print(' bob:', bob._objects)
print('alice:', alice._objects)
x
###Output
_____no_output_____ |
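###Markdown
Coming back to the motivation at the top: averaging values **before** calling *.get()*. A minimal sketch that only uses the operations shown in this notebook (send, pointer arithmetic, get); the two tensors stand in for gradients, this is not a real training loop, and it assumes the hooked tensors support division the same way they support the addition shown above:
###Code
# two "gradients" living on Bob's machine
grad_a = torch.tensor([1., 2., 3., 4.]).send(bob)
grad_b = torch.tensor([3., 2., 1., 0.]).send(bob)
# the average is computed remotely; only the averaged result ever comes back to us
avg = ((grad_a + grad_b) / 2).get()
avg
###Output
_____no_output_____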
1. Scrape Google.ipynb | ###Markdown
--- Scrape Google Forever --- Or till you go bankrupt, whichever is earlier. This is the first part of the series and I honestly don't know when it's going to be over. But these parts will work independently of the rest of the series. Let's hope. You start writing code to scrape Google, and the first thing you need is a piece of code that can scrape Google. That's pretty basic and that's what you do. El Scraper
###Code
# Add basic repos
import requests
import urllib
import json  # used further down to dump the results
#import pandas as pd
from requests_html import HTML
from requests_html import HTMLSession
# Get the source code given a url
def get_source(url):
# Given a url it's gonna give you the source code
try:
session = HTMLSession()
response = session.get(url)
return response
except requests.exceptions.RequestException as e:
print(e)
# Parse the query. We are gonna create the queries to be parsed later
def get_results(query):
# When you give a query as the input it returns the sourcecode as response
query = urllib.parse.quote_plus(query)
response = get_source("https://www.google.com/search?q=" + query)
return response
# There is gonna be a lot of noise in it. We are gonna need only the link, title and text.
def parse_results(response):
if not response:
return {}
css_identifier_result = ".tF2Cxc"
css_identifier_title = "h3"
css_identifier_link = ".yuRUbf a"
css_identifier_text = ".IsZvec"
results = response.html.find(css_identifier_result)
output = []
for result in results:
title = result.find(css_identifier_title, first=True)
title = title.text if title is not None else ""
link = result.find(css_identifier_link, first=True)
link = link.attrs['href'] if link is not None else ""
text = result.find(css_identifier_text, first=True)
text = text.text if text is not None else ""
item = {
"title": title,
"link": link,
"text": text
}
output.append(item)
return output
# Gonna wrap this all nicely
def google_search(query):
response = get_results(query)
return parse_results(response)
# And we can test it now
# Most of it has empty text. We are gonna change that hopefully
results = google_search("Python web scraping tutorial")
json.dumps(results)
###Output
_____no_output_____
###Markdown
Now that that's done, you are gonna start scraping Google. Or so you would think .... What about the search keyword? Are you gonna use the same keyword over and over again? We are gonna search Google forever. You are not gonna manually write keywords forever. So you randomly generate keywords. Let's see if we have got any libraries in Python for that. You wanna say hi to Google Dot Com. El Essential Generator Luckily there is a ready-made generator package that you can use. (Actually there are many generators out there, I just picked the first one I found)
###Code
# You install it first cause probably you don't have it
!pip install essential-generators
from essential_generators import DocumentGenerator
gen = DocumentGenerator()
print(gen.sentence())
###Output
Is required; whether physical, mental, and social updates. This has.
###Markdown
Now you can give this result to the google scraper
###Code
# Now you can give this result to the google scraper
google_search("A casinò. chemical substitute")
###Output
_____no_output_____
###Markdown
Google has a say about everything. Almost. Let's wrap this baby real nice.
###Code
def fetch_google_results():
gen = DocumentGenerator()
key = gen.sentence()
return google_search(key)
fetch_google_results()
###Output
_____no_output_____
###Markdown
I think we can wrap more things in it. Let's see. We can save the results to a file. Google has a say about everything. Almost. Now we are not just gonna print the result. We need to save it somewhere in some format. We can do anything here. Go with a SQL or NoSQL database, or just store it as a csv or a json. Let's do json. The result is a dict anyway.
###Code
import json
def fetch_google_results():
gen = DocumentGenerator()
key = gen.sentence()
result_dict = google_search(key)
json_object = json.dumps(result_dict)
return json_object
s = fetch_google_results()
s
###Output
_____no_output_____
###Markdown
Where you wanna store it? Let's save it as a json file
###Code
import json
def save_google_results_as_json():
gen = DocumentGenerator()
key = gen.sentence()
result_dict = google_search(key)
filename = key + ".json"
with open(filename, "w") as json_saver:
json.dump(result_dict, json_saver, indent=4, sort_keys=True)
# we are gonna save the pretty print version of json using the indent =4 param even though no human will
# read it eventually
###Output
_____no_output_____
###Markdown
Let's test it
###Code
save_google_results_as_json()
###Output
_____no_output_____
###Markdown
I have a beautiful json in the current folder named "Hosted UEFA however, coincided with many of the Isthmus of Suez and.json". We are gonna royally ignore the keywords Essential generator generates for now. It's doing what we asked of it. Now we just need to keep doing it. Easy peasy Japaneasy! "Let's scrape google forever" he said.
###Code
while True:
# That's an infinite loop right there in it's raw form. Don't do this unless you know for sure your code won't run
save_google_results_as_json()
# That is a stupid file name. Let's correct those
# Special mention : https://github.com/django/django/blob/main/django/utils/text.py
import re
def get_valid_filename(s):
s = str(s).strip().replace(' ', '_')
return re.sub(r'(?u)[^-\w.]', '', s)
import json
def save_google_results_as_json():
gen = DocumentGenerator()
key = gen.sentence()
result_dict = google_search(key)
filename = "Scraped/" + get_valid_filename(key[:40]) + ".json"
with open(filename, "w") as json_saver:
json.dump(result_dict, json_saver, indent=4, sort_keys=True)
# we are gonna save the pretty print version of json using the indent =4 param even though no human will
# read it eventually
###Output
_____no_output_____
###Markdown
"Let's scrape google forever" he said again.
###Code
while True:
# That's an infinite loop right there in it's raw form. Don't do this unless you know for sure your code won't run
save_google_results_as_json()
###Output
_____no_output_____ |
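###Markdown
A slightly gentler take on "forever": the same worker, but bounded and throttled so one failed request or an unlucky keyword does not end the series. The pause length here is an arbitrary assumption, not a recommendation.
###Code
import time

def scrape_for_a_while(iterations=100, pause_seconds=30):
    # same worker as before, just with a cap and a breather between requests
    for _ in range(iterations):
        try:
            save_google_results_as_json()
        except Exception as e:
            # keep going even if one keyword/request fails
            print("skipped one:", e)
        time.sleep(pause_seconds)

scrape_for_a_while(iterations=5, pause_seconds=10)
###Output
_____no_output_____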
soft_topk/demo.ipynb | ###Markdown
''' IntroductionThis is a demo of the SOFT top-k operator (https://arxiv.org/pdf/2002.06504.pdf). We demonstrate the usage of the provided `TopK_custom` module in the forward and the backward pass.'''
###Code
'''
# Set up
'''
import torch
from torch.nn.parameter import Parameter
import numpy as np
import soft
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rc('xtick', labelsize=20)
matplotlib.rc('ytick', labelsize=20)
torch.manual_seed(1)
num_iter = int(1e2)
k = 3
epsilon=5e-2 # larger epsilon lead to smoother relaxation, and requires less num_iter
soft_topk = soft.TopK_custom(k, epsilon=epsilon, max_iter=num_iter)
'''
# Input the scores
'''
scores = [5,2,3,4,1,6] #input the scores here
scores_tensor = Parameter(torch.FloatTensor([scores]))
print('======scores======')
print(scores)
'''
# Forward pass.
The goal of the forward pass is to identify the scores that belongs to top-k.
The `soft_topk` object returns a smoothed indicator function: The entries are close to 1 for top-k scores, and close to 0 for non-top-k scores.
The smoothness is controled by hyper-parameter `epsilon`.
'''
A = soft_topk(scores_tensor)
indicator_vector = A.data.numpy()
print('======topk scores======')
print(indicator_vector[0,:])
plt.imshow(indicator_vector, cmap='Greys')
# plt.axis('off')
plt.yticks([])
plt.xticks(range(len(scores)), scores)
plt.colorbar()
plt.show()
'''
# Backward Pass
The goal of training is to push the scores that should have been top-k to really be top-k.
For example, in neural kNN, we want to push the scores with the same labels as the query sample to be top-k.
In this demo, we mimick the loss function of neural kNN.
`picked` is the scores ids with the same label as the query sample. Our training goal is to push them to be top-k.
'''
picked = [1,2,3]
loss = 0
for item in picked:
loss += A[0,item]
loss.backward()
A_grad = scores_tensor.grad.clone()
print('======w.r.t score grad======')
print(A_grad.data.numpy())
'''
# Visualization of the Grad
'''
x = scores
grad = A_grad.numpy()[0,:]
grad = grad/np.linalg.norm(grad)
plt.figure(figsize=(len(scores),5))
plt.scatter(range(len(x)), x)
picked_scores = [x[item] for item in picked]
plt.scatter(picked, picked_scores, label='scores we want to push to smallest top-k')
for i, item in enumerate(x):
plt.arrow(i, item, 0, grad[i], head_width=abs(grad[i])/4, fc='k')
plt.xticks(range(len(x)), x)
plt.yticks([])
plt.xlim([-0.5, len(scores)-0.5])
plt.ylim([min(scores)-1, max(scores)+1])
plt.legend()
plt.show()
# clear the grad before rerun the forward-backward code
scores_tensor.grad.data.zero_()
###Output
_____no_output_____
###Markdown
''' IntroductionThis is a demo of the SOFT top-k operator (https://arxiv.org/pdf/2002.06504.pdf). We demonstrate the usage of the provided `TopK_custom` module in the forward and the backward pass.'''
###Code
'''
# Set up
'''
import torch
from torch.nn.parameter import Parameter
import numpy as np
import soft_ot as soft
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rc('xtick', labelsize=20)
matplotlib.rc('ytick', labelsize=20)
torch.manual_seed(1)
num_iter = int(1e2)
k = 3
epsilon=5e-2 # larger epsilon lead to smoother relaxation, and requires less num_iter
soft_topk = soft.TopK_custom(k, epsilon=epsilon, max_iter=num_iter)
'''
# Input the scores
'''
scores = [5,2,3,4,1,6] #input the scores here
scores_tensor = Parameter(torch.FloatTensor([scores]))
print('======scores======')
print(scores)
'''
# Forward pass.
The goal of the forward pass is to identify the scores that belongs to top-k.
The `soft_topk` object returns a smoothed indicator function: The entries are close to 1 for top-k scores, and close to 0 for non-top-k scores.
The smoothness is controled by hyper-parameter `epsilon`.
'''
A = soft_topk(scores_tensor)
indicator_vector = A.data.numpy()
print('======topk scores======')
print(indicator_vector[0,:])
plt.imshow(indicator_vector, cmap='Greys')
# plt.axis('off')
plt.yticks([])
plt.xticks(range(len(scores)), scores)
plt.colorbar()
plt.show()
'''
# Backward Pass
The goal of training is to push the scores that should have been top-k to really be top-k.
For example, in neural kNN, we want to push the scores with the same labels as the query sample to be top-k.
In this demo, we mimick the loss function of neural kNN.
`picked` is the scores ids with the same label as the query sample. Our training goal is to push them to be top-k.
'''
picked = [1,2,3]
loss = 0
for item in picked:
loss += A[0,item]
loss.backward()
A_grad = scores_tensor.grad.clone()
print('======w.r.t score grad======')
print(A_grad.data.numpy())
'''
# Visualization of the Grad
'''
x = scores
grad = A_grad.numpy()[0,:]
grad = grad/np.linalg.norm(grad)
plt.figure(figsize=(len(scores),5))
plt.scatter(range(len(x)), x)
picked_scores = [x[item] for item in picked]
plt.scatter(picked, picked_scores, label='scores we want to push to smallest top-k')
for i, item in enumerate(x):
plt.arrow(i, item, 0, grad[i], head_width=abs(grad[i])/4, fc='k')
plt.xticks(range(len(x)), x)
plt.yticks([])
plt.xlim([-0.5, len(scores)-0.5])
plt.ylim([min(scores)-1, max(scores)+1])
plt.legend()
plt.show()
# clear the grad before rerun the forward-backward code
scores_tensor.grad.data.zero_()
###Output
_____no_output_____ |
Investigating Happiness/Investigating Happiness Template.ipynb | ###Markdown
**Introduction**: * In this project, you'll build a Model to determine the most important factors in day-to-day life that makes a person happy. We've provided some of the code, but left most of the part for you. After you've submitted this project, feel free to explore the data and the model more.* The World Happiness Report is an annual publication of the United Nations Sustainable Development Solutions Network. It contains articles, and rankings of national happiness based on respondent ratings of their own lives,which the report also correlates with various life factors* The rankings of national happiness are based on a Cantril ladder survey. Nationally representative samples of respondents are asked to think of a ladder, with the best possible life for them being a 10, and the worst possible life being a 0. They are then asked to rate their own current lives on that 0 to 10 scale. The report correlates the results with various life factors.* In the reports, experts in fields including economics, psychology, survey analysis, and national statistics, describe how measurements of well-being can be used effectively to assess the progress of nations, and other topics. Each report is organized by chapters that delve deeper into issues relating to happiness, including mental illness, the objective benefits of happiness, the importance of ethics, policy implications, and links with the Organisation for Economic Co-operation and Development's (OECD) approach to measuring subjective well-being and other international and national efforts. ------------------------------------------------------------------------------------------------------------------------------------------------------------------- Is this GDP per capita which makes you happy ?<img src="https://i.pinimg.com/originals/35/da/23/35da236b480636ec8ffee367281fe1b1.gif" width="700" height="300" /> Is this Perception of Corruption about Goverment, which make you sad?<img src="https://media.tenor.com/images/50c6b91a0384dcc0c715abe9326789cd/tenor.gif" width="700" height="400" /> Is this Freedom of Life Choises which makes you happy ?<img src="https://media0.giphy.com/media/OmAdpbVnAAWJO/giphy.gif"width="700" height="400" /> Let us explore the factor of happiness.<img src="https://media1.giphy.com/media/1rKFURpStAa8VOiBLg/giphy.gif" width="700" height="400" /> Description In this project you are going to explore and explain the **relationship** between **happiness score** and other variable like **GDP per Capita**, **Life Expectancy**, **Freedom** etc. DatasetThe dataset which you are going to use is the world happiness report data. ContextThe World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. 
The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness. ContentThe happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others. InspirationWhat countries or regions rank the highest in overall happiness and each of the six factors contributing to happiness? How did country ranks or scores change between the 2015 and 2016 as well as the 2016 and 2017 reports? Did any country experience a significant increase or decrease in happiness? What is Dystopia?Dystopia is an imaginary country that has the world’s least-happy people. The purpose in establishing Dystopia is to have a benchmark against which all countries can be favorably compared (no country performs more poorly than Dystopia) in terms of each of the six key variables, thus allowing each sub-bar to be of positive width. The lowest scores observed for the six key variables, therefore, characterize Dystopia. Since life would be very unpleasant in a country with the world’s lowest incomes, lowest life expectancy, lowest generosity, most corruption, least freedom and least social support, it is referred to as “Dystopia,” in contrast to Utopia. What do the columns succeeding the Happiness Score(like Family, Generosity, etc.) describe?The following columns: GDP per Capita, Family, Life Expectancy, Freedom, Generosity, Trust Government Corruption describe the extent to which these factors contribute in evaluating the happiness in each country. The Dystopia Residual metric actually is the Dystopia Happiness Score(1.85) + the Residual value or the unexplained value for each country as stated in the previous answer.If you add all these factors up, you get the happiness score so it might be un-reliable to model them to predict Happiness Scores. Analyzing Happiness around the Globe. Lets Start Importing some useful libraries
###Code
# for some basic operations
import numpy as np
import pandas as pd
# for visualizations
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
# for interactive visualizations
import plotly.offline as py
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go
init_notebook_mode(connected = True)
from bubbly.bubbly import bubbleplot
# for model interpretations
import eli5
from eli5.sklearn import PermutationImportance
from lightgbm import LGBMRegressor
###Output
_____no_output_____
###Markdown
Reading the dataset: In this model you are going to use data for only 2015, 2016 and 2017, and you will use this data interchangeably to avoid repeating some parts. Later on, after going through this model, you can explore the data for 2018 and 2019 yourself.
###Code
# mention the datapath
data_path_2015=''
data_path_2016=''
data_path_2017=''
data_2015 = pd.read_csv(data_path_2015)
data_2016 = pd.read_csv(data_path_2016)
data_2017 = pd.read_csv(data_path_2017)
###Output
_____no_output_____
###Markdown
Understanding the dataA critical step in working with machine learning models is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
###Code
data_2015.head()
data_2016.head()
data_2017.head()
data_2015.describe()
data_2016.describe()
data_2017.describe()
###Output
_____no_output_____
###Markdown
Since now you have seen that data have not changed considerably so you will use all 3 years data interchangably throughout this model Exploratory Data Analysis 1. Bi-variate data analysis* Bivariate analysis is the simultaneous analysis of two variables (attributes). It explores the concept of relationship between two variables, whether there exists an association and the strength of this association, or whether there are differences between two variables and the significance of these differences. There are three types of bivariate analysis. >Numerical & Numerical- It is performed when the variables to be analyzed are both numerical.>Categorical & Categorical- It is performed when the variables to be analyzed are both categorical.>Numerical & Categorical- It is performed when one of the variables to be analyzed is numerical and other is categorical.Now lets make a violin plot between "Happiness Score" and "Region" to see how the happiness of people vary across different parts of the world.
###Code
# Make a violin plot for any year data
# happiness score vs continents
## Start code
## end code
###Output
_____no_output_____
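###Markdown
One possible way to fill in the cell above (a sketch, assuming the 2016 data uses the column names `Region` and `Happiness Score`):
###Code
plt.rcParams['figure.figsize'] = (20, 10)
sns.violinplot(x='Region', y='Happiness Score', data=data_2016)
plt.xticks(rotation=90)
plt.title('Happiness Score across Regions', fontsize=20)
plt.show()
###Output
_____no_output_____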
###Markdown
Multi-Variate Analysis* Multivariate analysis (MVA) is based on the principles of multivariate statistics, which involves observation and analysis of more than one statistical outcome variable at a time. Typically, MVA is used to address the situations where multiple measurements are made on each experimental unit and the relations among these measurements and their structures are important.* Essentially, multivariate analysis is a tool to find patterns and relationships between several variables simultaneously. It lets us predict the effect a change in one variable will have on other variables. ... This gives multivariate analysis a decisive advantage over other forms of analysis. Correlations Between the Data**What is a heatmap??*** A heatmap is a two-dimensional graphical representation of data where the individual values that are contained in a matrix are represented as colors. The seaborn python package allows the creation of annotated heatmaps which can be tweaked using Matplotlib tools as per the creator's requirement.* Image below is of a heatmap.<img src="https://d1rwhvwstyk9gu.cloudfront.net/2017/07/seaburn-2.png" width="800px">Lets find some top correlating features of this dataset by making a heatmap using Seaborn Library.
###Code
# Make a heatmap for any year data
## start data
## end data
###Output
_____no_output_____
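###Markdown
A sketch for the cell above, mirroring the region-wise heatmaps that follow but computed over the whole 2016 dataset (only the numeric columns enter the correlation):
###Code
plt.rcParams['figure.figsize'] = (20, 15)
sns.heatmap(data_2016.corr(), cmap='copper', annot=True)
plt.show()
###Output
_____no_output_____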
###Markdown
> In the above Heat Map you can see that Happiness Score is very highly correlated with Economy, Health, and Family Satisfaction, somewhat related with Freedom, but on average has a very low relation with Trust in Government. Let's analyze these heatmaps region-wise to get a better insight.Code for making one of the heatmaps is provided below and the others are left for you to write. 1. Correlations for Western Europe
###Code
plt.rcParams['figure.figsize'] = (20, 15)
d = data_2016.loc[lambda data_2016: data_2016['Region'] == 'Western Europe']
sns.heatmap(d.corr(), cmap = 'Wistia', annot = True)
plt.show()
###Output
_____no_output_____
###Markdown
> You will notice that the Heat Map for Europe in particular has one more thing to add: apart from Family Satisfaction, Freedom, Economy and Generosity, Happiness is also highly correlated with Trust in Government.> The European Region is the Happiest Region so far. 2. Correlations for Eastern Asia
###Code
# Make a heatmap for 2016 year data for Eastern Asia region only
## start data
## end data
###Output
_____no_output_____
###Markdown
> You will have noticed that the situation worsens here, as the correlation is negative for many important factors such as Economy, Health and Trust in Government, which makes the situation very critical. It has positive correlations only with Freedom, Generosity and Family Satisfaction. 3. North America
###Code
# Make a heatmap for 2016 year data for Northern America region only
## start data
## end data
###Output
_____no_output_____
###Markdown
> You will notice that everything is highly correlated with Happiness in America. Even though it is a very large country, America is still able to keep its people happy, and it stands at position number 10 in the Happiness Rankings for the World. 4. Middle East and Northern Africa
###Code
# Make a heatmap for 2016 year data for Middle East and Northern Africa region only
## start data
## end data
###Output
_____no_output_____
###Markdown
> In the above section you have noticed that the correlations are quite good, with almost all the important factors being highly correlated with Happiness. Family Satisfaction stands out as the most important factor for happiness in this region. 5. Sub-Saharan Africa
###Code
# Make a heatmap for 2016 year data for Sub-Saharan Africa region only
## start data
## end data
###Output
_____no_output_____
###Markdown
> The situation is very bad for the Sub-Saharan Region as it is the unhappiest region in the world. The correlations with Happiness Score are very low for features such as Generosity, Family Satisfaction, Freedom etc. Almost all of the features have a correlation below 0.5, which is very bad.> You can also make more heatmaps for the remaining regions, it's up to you. Bubble Charts * A bubble chart (aka bubble plot) is an extension of the scatter plot used to look at relationships between three numeric variables. Each dot in a bubble chart corresponds with a single data point, and the variables' values for each point are indicated by horizontal position, vertical position, and dot size. * Like the scatter plot, a bubble chart is primarily used to depict and show relationships between numeric variables. However, the addition of marker size as a dimension allows for the comparison between three variables rather than just two.Let's make some bubble charts to analyze the type of relationship among various features of the dataset.Code for making one bubble chart is provided below to explain its implementation; the others are left for you to write on your own.
###Code
# Happiness vs Generosity vs Economy
import warnings
warnings.filterwarnings('ignore')
figure = bubbleplot(dataset = data_2015, x_column = 'Happiness Score', y_column = 'Generosity',
bubble_column = 'Country', size_column = 'Economy (GDP per Capita)', color_column = 'Region',
x_title = "Happiness Score", y_title = "Generosity", title = 'Happiness vs Generosity vs Economy',
x_logscale = False, scale_bubble = 1, height = 650)
py.iplot(figure, config={'scrollzoom': True})
# Make a bubble chart using 2015 year data
# Happiness vs Trust vs Economy
## start code
## end code
# Make a bubble chart using 2016 year data
# Happiness vs Health vs Economy
## start code
## end code
# Make a bubble chart using 2015 year data
# Happiness vs Family vs Economy
## start code
## end code
###Output
_____no_output_____
###Markdown
> Bubble plot depicting the relation between the Happiness Score and Family Satisfaction, where the size of the bubbles represents the Economy and the color of the bubbles represents the different regions of the world.* It is quite visible that as the Family Satisfaction rating increases, the Happiness Score increases. * Also, the European countries and Australia are among the happiest regions, after America.* There is not even a single country in the American region with a low Happiness Index.* Asian and African countries suffer from some serious issues, which is why no Asian or African country stands at a good position in terms of the Happiness Index.* Some countries in the Middle East are happy while some are unhappy. Bullet Chart* A bullet graph is a variation of a bar graph developed to replace dashboard gauges and meters. A bullet graph is useful for comparing the performance of a primary measure to one or more other measures.* Bullet charts came into existence to overcome the drawbacks of gauge charts; we can refer to them as linear gauge charts.Let's make a bullet chart to represent the range of some of the most important attributes given in the data.
###Code
import plotly.figure_factory as ff
data = (
{"label": "Happiness", "sublabel":"score",
"range": [5, 6, 8], "performance": [5.5, 6.5], "point": [7]},
{"label": "Economy", "sublabel": "score", "range": [0, 1, 2],
"performance": [1, 1.5], "sublabel":"score","point": [1.5]},
{"label": "Family","sublabel":"score", "range": [0, 1, 2],
"performance": [1, 1.5],"sublabel":"score", "point": [1.3]},
{"label": "Freedom","sublabel":"score", "range": [0, 0.3, 0.6],
"performance": [0.3, 0.4],"sublabel":"score", "point": [0.5]},
{"label": "Trust", "sublabel":"score","range": [0, 0.2, 0.5],
"performance": [0.3, 0.4], "point": [0.4]}
)
fig = ff.create_bullet(
data, titles='label', subtitles='sublabel', markers='point',
measures='performance', ranges='range', orientation='v',
)
py.iplot(fig, filename='bullet chart from dict')
###Output
_____no_output_____
###Markdown
> Bullet chart representing the range of some of the most important attributes given in the data. We have taken Happiness, Economy, Family, Freedom and Trust for the analysis of their ranges.* If the value of a given attribute lies in the dark blue region, it is in the critical region.* If the value lies in the light blue region, it is in good condition.* If the value lies above or near the diamond, it is in the best state or condition.* The white regions depict the maximum that could be achieved.You can make more bullet charts for better visualizations. Pie Chart
###Code
# Make a pie chart that depicts the Number of Countries from each Region
## start code
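# A possible sketch (not the author's original code); it uses the 2015 frame and plotly's
# go.Pie, and assumes a 'Region' column as in the Kaggle dataset.
region_counts = data_2015['Region'].value_counts()
pie_fig = go.Figure(data=[go.Pie(labels=region_counts.index, values=region_counts.values)],
                    layout=dict(title='Number of Countries from each Region'))
py.iplot(pie_fig)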
## end code
###Output
_____no_output_____
###Markdown
> The above pie chart depicts the number of countries from each region. * There are only two countries each from the North America (USA and Canada) and Australia (Australia and New Zealand) regions.* The highest numbers of countries are from the Sub-Saharan Africa and Central and Eastern Europe regions, with 40 and 29 countries respectively. Choropleth Maps* A Choropleth Map is a map composed of colored polygons. It is used to represent spatial variations of a quantity. This page documents how to build outline choropleth maps, but you can also build choropleth tile maps using our Mapbox trace types.* Making choropleth maps requires two main types of input:1. Geometry information:>>This can either be a supplied GeoJSON file where each feature has either an id field or some identifying value in properties; or>>one of the built-in geometries within plotly: US states and world countries 2. A list of values indexed by feature identifier.>>The GeoJSON data is passed to the geojson argument, and the data is passed into the color argument of px.choropleth (z if using graph_objects), in the same order as the IDs are passed into the location argument.* Note the geojson attribute can also be the URL to a GeoJSON file, which can speed up map rendering in certain cases.Let's make some choropleth maps to get a better insight into the relationship between "Country" and the other features.Code for making one choropleth map is provided below to explain its implementation; the others are left for you to write on your own. Country vs Generosity
###Code
trace1 = [go.Choropleth(
colorscale = 'Earth',
locationmode = 'country names',
locations = data_2017['Country'],
text = data_2017['Country'],
z = data_2017['Generosity'],
)]
layout = dict(title = 'Generosity',
geo = dict(
showframe = True,
showocean = True,
showlakes = True,
showcoastlines = True,
projection = dict(
type = 'hammer'
)))
projections = [ "equirectangular", "mercator", "orthographic", "natural earth","kavrayskiy7",
"miller", "robinson", "eckert4", "azimuthal equal area","azimuthal equidistant",
"conic equal area", "conic conformal", "conic equidistant", "gnomonic", "stereographic",
"mollweide", "hammer", "transverse mercator", "albers usa", "winkel tripel" ]
buttons = [dict(args = ['geo.projection.type', y],
label = y, method = 'relayout') for y in projections]
annot = list([ dict( x=0.1, y=0.8, text='Projection', yanchor='bottom',
xref='paper', xanchor='right', showarrow=False )])
# Update Layout Object
layout[ 'updatemenus' ] = list([ dict( x=0.1, y=0.8, buttons=buttons, yanchor='top' )])
layout[ 'annotations' ] = annot
fig = go.Figure(data = trace1, layout = layout)
py.iplot(fig)
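# A more compact alternative using plotly.express, as mentioned in the notes above (a sketch,
# assuming plotly.express is available; px ships built-in world-country geometries, so no
# GeoJSON file is needed here).
import plotly.express as px
px_fig = px.choropleth(data_2017, locations='Country', locationmode='country names',
                       color='Generosity', hover_name='Country',
                       title='Generosity (plotly.express version)')
px_fig.show()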
###Output
_____no_output_____
###Markdown
* You will notice that Generosity is really low in big countries like Russia, China and India, but the USA, Australia and Canada have high ratings for generosity.* Africa and South America have very low Generosity scores in general. Top 10 Most Generous Countries
###Code
data_2017[['Country', 'Generosity']].sort_values(by = 'Generosity',
ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Country vs Trust in Government (Corruption)
###Code
# Make a choropleth map depicting Country vs Trust in Government (Corruption) using the 2017 data
## start code
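# A possible sketch, mirroring the Generosity map above; the column name is taken from the
# sorting cell below, and the colorscale choice is arbitrary.
trust_map = go.Figure(
    data=[go.Choropleth(
        colorscale='Viridis',
        locationmode='country names',
        locations=data_2017['Country'],
        text=data_2017['Country'],
        z=data_2017['Trust..Government.Corruption.'])],
    layout=dict(title='Trust in Government (Corruption)'))
py.iplot(trust_map)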
## end code
###Output
_____no_output_____
###Markdown
* You will notice that trust in government is low almost all around the world, except in Norway, Sweden and Finland.* Canada, Saudi Arabia, Germany, the United Kingdom, Somalia, Ireland and Australia also show good trust in the governance of their countries. Top 10 Countries with Trust in Government
###Code
data_2017[['Country', 'Trust..Government.Corruption.']].sort_values(by = 'Trust..Government.Corruption.',
ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Country vs Family Satisfaction Index
###Code
# Make a choropleth map depicting Country vs Family Satisfaction Index using the 2017 data
## start code
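# Same pattern as the Trust map sketch above, swapping the z column (a sketch).
py.iplot(go.Figure(
    data=[go.Choropleth(colorscale='Viridis', locationmode='country names',
                        locations=data_2017['Country'], text=data_2017['Country'],
                        z=data_2017['Family'])],
    layout=dict(title='Family Satisfaction Index')))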
## end code
###Output
_____no_output_____
###Markdown
* You will notice that India and China in particular have comparatively low Family Satisfaction. * Across most of the world the Family Satisfaction rate is really good.* The Central African Republic has the lowest Family Satisfaction score in the world. Top 10 Countries in Family Satisfaction
###Code
data_2017[['Country', 'Family']].sort_values(by = 'Family', ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Country vs Economy (GDP per Capita)
###Code
# Make a choropleth map depicting Country vs Economy (GDP per Capita) using the 2017 data
## start code
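# Same pattern as the Trust map sketch above, swapping the z column (a sketch).
py.iplot(go.Figure(
    data=[go.Choropleth(colorscale='Viridis', locationmode='country names',
                        locations=data_2017['Country'], text=data_2017['Country'],
                        z=data_2017['Economy..GDP.per.Capita.'])],
    layout=dict(title='Economy (GDP per Capita)')))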
## end code
###Output
_____no_output_____
###Markdown
* You will notice that America, Canada, Australia, Saudi Arabia and the European countries are the leaders in economy and GDP. * Smaller countries like Norway, Qatar and Luxembourg have the best GDP rates in the world.* Most of the African countries have a very low GDP rate.* India, Pakistan and Myanmar have very low GDP within the Asian regions. Top 10 Countries with Best Economy
###Code
data_2017[['Country', 'Economy..GDP.per.Capita.']].sort_values(by = 'Economy..GDP.per.Capita.',
ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Country vs Freedom
###Code
# Make a choropleth map depicting Country vs Freedom using the 2017 data
## start code
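# Same pattern as the Trust map sketch above, swapping the z column (a sketch).
py.iplot(go.Figure(
    data=[go.Choropleth(colorscale='Viridis', locationmode='country names',
                        locations=data_2017['Country'], text=data_2017['Country'],
                        z=data_2017['Freedom'])],
    layout=dict(title='Freedom')))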
## end code
###Output
_____no_output_____
###Markdown
* It looks like Canada, Australia and Europe are the best places on Earth to live; they have high scores throughout, and in this case too they are the winners.* Europe as a whole has a very high Freedom Index in comparison to the other countries of the world.* African countries such as Sudan and Angola have the lowest Freedom Index in the world. Top 10 Most Freedom Oriented Countries
###Code
data_2017[['Country', 'Freedom']].sort_values(by = 'Freedom', ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Model Building* For this project your main task is to determine which factors are most important for happiness in people.* For this you will not build a predictive model as such, but apply an LGBMRegressor together with permutation importance to determine the most important factors for happiness.
###Code
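# Fit a gradient-boosted regressor on the 2016 factor columns, then use permutation
# importance to rank how strongly each factor drives the Happiness Score.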
lgbm = LGBMRegressor(n_estimators=5000)
indData = data_2016.loc[:,"Economy (GDP per Capita)":"Generosity"]
depData = data_2016.pop("Happiness Score")
lgbm.fit(indData, depData)
columns = indData.columns.to_list()
perm = PermutationImportance(lgbm, random_state=10).fit(indData, depData)
eli5.show_weights(perm, feature_names = columns)
###Output
_____no_output_____
###Markdown
**Finding from Permutation Importance**- You will see that GDP per capita has the highest impact on the Happiness Score - Perception of Government Corruption and Generosity have the least impact on the Happiness Score. At Last Country vs Happiness Rank
###Code
trace1 = [go.Choropleth(
colorscale = 'Electric',
locationmode = 'country names',
locations = data_2015['Country'],
text = data_2015['Country'],
z = data_2015['Happiness Rank'],
)]
layout = dict(title = 'Happiness Rank',
geo = dict(
showframe = True,
showocean = True,
showlakes = True,
showcoastlines = True,
projection = dict(
type = 'hammer'
)))
projections = [ "equirectangular", "mercator", "orthographic", "natural earth","kavrayskiy7",
"miller", "robinson", "eckert4", "azimuthal equal area","azimuthal equidistant",
"conic equal area", "conic conformal", "conic equidistant", "gnomonic", "stereographic",
"mollweide", "hammer", "transverse mercator", "albers usa", "winkel tripel" ]
buttons = [dict(args = ['geo.projection.type', y],
label = y, method = 'relayout') for y in projections]
annot = list([ dict( x=0.1, y=0.8, text='Projection', yanchor='bottom',
xref='paper', xanchor='right', showarrow=False )])
# Update Layout Object
layout[ 'updatemenus' ] = list([ dict( x=0.1, y=0.8, buttons=buttons, yanchor='top' )])
layout[ 'annotations' ] = annot
fig = go.Figure(data = trace1, layout = layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
**Finding from Permutation Importance**- GDP per capita has the highest impact on the Happiness Score - Perception of Government Corruption and Generosity have the least impact on the Happiness Score. Top 10 Happiest Countries
###Code
data_2016[['Country', 'Happiness Rank']].sort_values(by = 'Happiness Rank', ascending = True).head(10)
###Output
_____no_output_____
###Markdown
Conclusions
###Code
## write your conclusions here about which factors you found most important for happiness.
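# A possible summary based on the results shown above (an illustration, not the author's
# final conclusions):
# - Economy (GDP per capita) had the highest impact on the Happiness Score.
# - Family satisfaction was also strongly related to happiness, especially in some regions.
# - Perception of government corruption (Trust) and Generosity had the least impact.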
###Output
_____no_output_____ |
Web_App_Launcher.ipynb | ###Markdown
Installing EfficientNet
###Code
!pip install -q efficientnet
import efficientnet.tfkeras as efn
###Output
[?25l
[K |██████▌ | 10kB 29.5MB/s eta 0:00:01
[K |█████████████ | 20kB 11.5MB/s eta 0:00:01
[K |███████████████████▍ | 30kB 9.0MB/s eta 0:00:01
[K |█████████████████████████▉ | 40kB 9.7MB/s eta 0:00:01
[K |████████████████████████████████| 51kB 4.9MB/s
[?25h
###Markdown
Installing Streamlit for Web App
###Code
!pip install streamlit
#Installing necessary libraries
!pip install gevent
# Upgrading IpyKernel
!pip install -U ipykernel
# Installing Pyngrok for generating the public URL token
!pip install pyngrok
# Getting the necessary files for Ngrok to work
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip -qq ngrok-stable-linux-amd64.zip
#Getting the public URL
get_ipython().system_raw('./ngrok http 8501 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
#Running the streamlit server
!streamlit run appstreamlit.py
###Output
[0m
[34m[1m You can now view your Streamlit app in your browser.[0m
[0m
[34m Network URL: [0m[1mhttp://172.28.0.2:8501[0m
[34m External URL: [0m[1mhttp://34.90.191.121:8501[0m
[0m
2020-10-08 11:18:39.529215: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-08 11:18:40.690562: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-10-08 11:18:40.745144: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 11:18:40.745763: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:00:04.0 name: Tesla P100-PCIE-16GB computeCapability: 6.0
coreClock: 1.3285GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2020-10-08 11:18:40.745825: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-08 11:18:40.982596: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-08 11:18:41.120079: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-08 11:18:41.146072: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-08 11:18:41.447745: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-08 11:18:41.482064: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-08 11:18:41.993837: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-08 11:18:41.994021: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 11:18:41.994659: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 11:18:41.995206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-08 11:18:41.996089: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX512F
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-08 11:18:42.036895: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2000175000 Hz
2020-10-08 11:18:42.037128: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x19b39c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-08 11:18:42.037171: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-08 11:18:42.168695: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 11:18:42.169348: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x19b3b80 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-08 11:18:42.169390: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla P100-PCIE-16GB, Compute Capability 6.0
2020-10-08 11:18:42.171080: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 11:18:42.171642: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:00:04.0 name: Tesla P100-PCIE-16GB computeCapability: 6.0
coreClock: 1.3285GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2020-10-08 11:18:42.171747: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-08 11:18:42.171801: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-08 11:18:42.171828: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-08 11:18:42.171859: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-08 11:18:42.171883: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-08 11:18:42.171906: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-08 11:18:42.171929: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-08 11:18:42.172014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 11:18:42.172616: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 11:18:42.173157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-08 11:18:42.177751: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-08 11:18:46.200340: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-08 11:18:46.200395: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-10-08 11:18:46.200407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-10-08 11:18:46.204864: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 11:18:46.205506: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 11:18:46.206044: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2020-10-08 11:18:46.206106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14968 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0)
2020-10-08 11:20:01.091249: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-08 11:20:02.529750: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
[34m Stopping...[0m
|
notebooks/dataset-projections/plot-example-data.ipynb | ###Markdown
load datasets
###Code
datasets = {}
###Output
_____no_output_____
###Markdown
MNIST
###Code
from tensorflow.keras.datasets import mnist
# load dataset
(train_images, Y_train), (test_images, Y_test) = mnist.load_data()
X_train = (train_images/255.).astype('float32')
X_test = (test_images/255.).astype('float32')
X_train = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
# subset a validation set
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:]
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid]
# flatten X
X_train_flat = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test_flat = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
X_valid_flat= X_valid.reshape((len(X_valid), np.product(np.shape(X_valid)[1:])))
print(len(X_train), len(X_valid), len(X_test))
datasets['mnist'] = {'X':{}, 'Y':{}}
datasets['mnist']['X']['train'] = X_train
datasets['mnist']['X']['valid'] = X_valid
datasets['mnist']['X']['test'] = X_test
datasets['mnist']['Y']['train'] = Y_train
datasets['mnist']['Y']['valid'] = Y_valid
datasets['mnist']['Y']['test'] = Y_test
###Output
_____no_output_____
###Markdown
cifar10
###Code
from tensorflow.keras.datasets import cifar10
# load dataset
(train_images, Y_train), (test_images, Y_test) = cifar10.load_data()
X_train = (train_images/255.).astype('float32')
X_test = (test_images/255.).astype('float32')
X_train = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
# subset a validation set
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:].flatten()
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid].flatten()
Y_test = Y_test.flatten()
print(len(X_train), len(X_valid), len(X_test))
datasets['cifar10'] = {'X':{}, 'Y':{}}
datasets['cifar10']['X']['train'] = X_train
datasets['cifar10']['X']['valid'] = X_valid
datasets['cifar10']['X']['test'] = X_test
datasets['cifar10']['Y']['train'] = Y_train
datasets['cifar10']['Y']['valid'] = Y_valid
datasets['cifar10']['Y']['test'] = Y_test
###Output
_____no_output_____
###Markdown
FMNIST
###Code
from tensorflow.keras.datasets import fashion_mnist
# load dataset
(train_images, Y_train), (test_images, Y_test) = fashion_mnist.load_data()
X_train = (train_images/255.).astype('float32')
X_test = (test_images/255.).astype('float32')
X_train = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
# subset a validation set
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:]
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid]
# flatten X
X_train_flat = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test_flat = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
X_valid_flat= X_valid.reshape((len(X_valid), np.product(np.shape(X_valid)[1:])))
print(len(X_train), len(X_valid), len(X_test))
datasets['fmnist'] = {'X':{}, 'Y':{}}
datasets['fmnist']['X']['train'] = X_train
datasets['fmnist']['X']['valid'] = X_valid
datasets['fmnist']['X']['test'] = X_test
datasets['fmnist']['Y']['train'] = Y_train
datasets['fmnist']['Y']['valid'] = Y_valid
datasets['fmnist']['Y']['test'] = Y_test
###Output
_____no_output_____
###Markdown
Cassins
###Code
syllable_df = pd.read_pickle(DATA_DIR/'cassins'/ 'cassins.pickle').drop(columns=['audio'])
top_labels = (
pd.DataFrame(
{i: [np.sum(syllable_df.labels.values == i)] for i in syllable_df.labels.unique()}
)
.T.sort_values(by=0, ascending=False)[:20]
.T
)
top_labels
sylllable_df = syllable_df[syllable_df.labels.isin(top_labels.columns)]
sylllable_df = sylllable_df.reset_index()
specs = np.array(list(sylllable_df.spectrogram.values))
specs.shape
sylllable_df['subset'] = 'train'
sylllable_df.loc[:1000, 'subset'] = 'valid'
sylllable_df.loc[1000:1999, 'subset'] = 'test'
len(sylllable_df)
Y_train = np.array(list(sylllable_df.labels.values[sylllable_df.subset == 'train']))
Y_valid = np.array(list(sylllable_df.labels.values[sylllable_df.subset == 'valid']))
Y_test = np.array(list(sylllable_df.labels.values[sylllable_df.subset == 'test']))
X_train = np.array(list(sylllable_df.spectrogram.values[sylllable_df.subset == 'train'])) #/ 255.
X_valid = np.array(list(sylllable_df.spectrogram.values[sylllable_df.subset == 'valid']))# / 255.
X_test = np.array(list(sylllable_df.spectrogram.values[sylllable_df.subset == 'test'])) #/ 255.
from sklearn.preprocessing import OrdinalEncoder
enc = OrdinalEncoder()
Y_train = enc.fit_transform([[i] for i in Y_train]).astype('int').flatten()
datasets['cassins_dtw'] = {'X':{}, 'Y':{}}
datasets['cassins_dtw']['X']['train'] = X_train
datasets['cassins_dtw']['X']['valid'] = X_valid
datasets['cassins_dtw']['X']['test'] = X_test
datasets['cassins_dtw']['Y']['train'] = Y_train
datasets['cassins_dtw']['Y']['valid'] = Y_valid
datasets['cassins_dtw']['Y']['test'] = Y_test
###Output
_____no_output_____
###Markdown
Moons
###Code
from sklearn.datasets import make_moons
X_train, Y_train = make_moons(1000, random_state=0, noise=0.1)
X_train_flat = X_train
X_test, Y_test = make_moons(1000, random_state=1, noise=0.1)
X_test_flat = X_test
X_valid, Y_valid = make_moons(1000, random_state=2, noise=0.1)
datasets['moons'] = {'X':{}, 'Y':{}}
datasets['moons']['X']['train'] = X_train
datasets['moons']['X']['valid'] = X_valid
datasets['moons']['X']['test'] = X_test
datasets['moons']['Y']['train'] = Y_train
datasets['moons']['Y']['valid'] = Y_valid
datasets['moons']['Y']['test'] = Y_test
###Output
_____no_output_____
###Markdown
Bison
###Code
import requests
import json
url = "https://raw.githubusercontent.com/duhaime/umap-zoo/03819ed0954b524919671a72f61a56032099ba11/data/json/bison.json"
animal = np.array(json.loads(requests.get(url).text)['3d'])
X_train = animal
Y_train = animal[:, 0]
datasets['bison'] = {'X':{}, 'Y':{}}
datasets['bison']['X']['train'] = X_train
datasets['bison']['Y']['train'] = Y_train
###Output
_____no_output_____
###Markdown
macoskco2015
###Code
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
import gzip
import pickle
with gzip.open(DATA_DIR / 'macosko_2015.pkl.gz', "rb") as f:
data = pickle.load(f)
x = data["pca_50"]
y = data["CellType1"].astype(str)
print("Data set contains %d samples with %d features" % x.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=.1, random_state=42)
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:]
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid]
X_train_flat = X_train
from sklearn.preprocessing import OrdinalEncoder
enc = OrdinalEncoder()
Y_train = enc.fit_transform([[i] for i in Y_train]).flatten()
datasets['macosko2015'] = {'X':{}, 'Y':{}}
datasets['macosko2015']['X']['train'] = X_train
datasets['macosko2015']['X']['valid'] = X_valid
datasets['macosko2015']['X']['test'] = X_test
datasets['macosko2015']['Y']['train'] = Y_train
datasets['macosko2015']['Y']['valid'] = Y_valid
datasets['macosko2015']['Y']['test'] = Y_test
###Output
_____no_output_____
###Markdown
Plot data
###Code
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR, FIGURE_DIR, save_fig
dset_params = {
'moons': {
'cmap': plt.cm.coolwarm,
's': 10,
'alpha': 0.5,
},
'mnist': {
'cmap': plt.cm.tab10,
's': 0.1,
'alpha': 0.5,
},
'fmnist': {
'cmap': plt.cm.tab10,
's': 0.1,
'alpha': 0.5,
},
'bison': {
'cmap': plt.cm.viridis,
's': 0.1,
'alpha': 0.5,
},
'cifar10': {
'cmap': plt.cm.tab10,
's': 0.1,
'alpha': 0.5,
},
'cassins_dtw': {
'cmap': plt.cm.tab20,
's': 0.1,
'alpha': 0.5,
},
'macosko2015': {
'cmap': plt.cm.tab20,
's': 0.1,
'alpha': 0.5,
},
}
###Output
_____no_output_____
###Markdown
MNIST
###Code
import seaborn as sns
dataset = 'mnist'
n_class = len(np.unique(datasets[dataset]['Y']['train']))
pal = sns.color_palette('tab10', n_class)
sns.palplot(pal)
ncols = 4
nrows = 5
nex_per_class = 2
x = datasets[dataset]['X']['train']
y = datasets[dataset]['Y']['train']
exs = np.concatenate([[(class_, x[y == class_][i]) for i in range(nex_per_class)] for class_ in np.unique(y)])
fig, axs = plt.subplots(ncols=ncols, nrows=nrows, figsize=(ncols*2, nrows*2))
for ax, (class_, ex) in zip(axs.flatten(), enumerate(exs)):
ax.matshow(ex[1].reshape((28,28)), cmap= plt.cm.Greys)
#ax.axis('off')
ax.set_xticks([])
ax.set_yticks([])
for spine in ax.spines.values():
spine.set_edgecolor(pal[ex[0]])
spine.set_linewidth(10)
figdir = FIGURE_DIR / 'data_examples' / 'mnist'
ensure_dir(figdir)
save_fig(figdir, dpi = 300, save_pdf=True, pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
FMNIST
###Code
import seaborn as sns
dataset = 'fmnist'
n_class = len(np.unique(datasets[dataset]['Y']['train']))
pal = sns.color_palette('tab10', n_class)
sns.palplot(pal)
ncols = 4
nrows = 5
nex_per_class = 2
x = datasets[dataset]['X']['train']
y = datasets[dataset]['Y']['train']
exs = np.concatenate([[(class_, x[y == class_][i]) for i in range(nex_per_class)] for class_ in np.unique(y)])
fig, axs = plt.subplots(ncols=ncols, nrows=nrows, figsize=(ncols*2, nrows*2))
for ax, (class_, ex) in zip(axs.flatten(), enumerate(exs)):
ax.matshow(ex[1].reshape((28,28)), cmap= plt.cm.Greys)
#ax.axis('off')
ax.set_xticks([])
ax.set_yticks([])
for spine in ax.spines.values():
spine.set_edgecolor(pal[ex[0]])
spine.set_linewidth(10)
figdir = FIGURE_DIR / 'data_examples' / 'fmnist'
ensure_dir(figdir)
save_fig(figdir, dpi = 300, save_pdf=True, pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
CIFAR10
###Code
import seaborn as sns
dataset = 'cifar10'
n_class = len(np.unique(datasets[dataset]['Y']['train']))
pal = sns.color_palette('tab10', n_class)
sns.palplot(pal)
ncols = 4
nrows = 5
nex_per_class = 2
x = datasets[dataset]['X']['train']
y = datasets[dataset]['Y']['train']
exs = np.concatenate([[(class_, x[y == class_][i]) for i in range(nex_per_class)] for class_ in np.unique(y)])
fig, axs = plt.subplots(ncols=ncols, nrows=nrows, figsize=(ncols*2, nrows*2))
for ax, (class_, ex) in zip(axs.flatten(), enumerate(exs)):
ax.imshow(ex[1].reshape((32,32,3)), cmap= plt.cm.Greys)
#ax.axis('off')
ax.set_xticks([])
ax.set_yticks([])
for spine in ax.spines.values():
spine.set_edgecolor(pal[ex[0]])
spine.set_linewidth(10)
figdir = FIGURE_DIR / 'data_examples' / 'cifar10'
ensure_dir(figdir)
save_fig(figdir, dpi = 300, save_pdf=True, pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
cassins-dtw
###Code
import seaborn as sns
dataset = 'cassins_dtw'
n_class = len(np.unique(datasets[dataset]['Y']['train']))
pal = sns.color_palette('tab20', n_class)
sns.palplot(pal)
ncols = 4
nrows = 5
nex_per_class = 1
x = datasets[dataset]['X']['train']
y = datasets[dataset]['Y']['train']
exs = np.concatenate([[(class_, x[y == class_][i]) for i in range(nex_per_class)] for class_ in np.unique(y)])
exs = np.random.permutation(exs)
fig, axs = plt.subplots(ncols=ncols, nrows=nrows, figsize=(ncols*2, nrows*2))
for ax, (class_, ex) in zip(axs.flatten(), enumerate(exs)):
ax.matshow(ex[1].reshape((32,31)), cmap= plt.cm.Greys)
#ax.axis('off')
ax.set_xticks([])
ax.set_yticks([])
for spine in ax.spines.values():
spine.set_edgecolor(pal[ex[0]])
spine.set_linewidth(10)
figdir = FIGURE_DIR / 'data_examples' / 'cassins'
ensure_dir(figdir)
save_fig(figdir, dpi = 300, save_pdf=True, pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
macosko2015
###Code
dataset = 'macosko2015'
n_class = len(np.unique(datasets[dataset]['Y']['train']))
pal = sns.color_palette('tab20', 20)
sns.palplot(pal)
11/12 * 20
y = datasets[dataset]['Y']['train']
inverse_labs = enc.inverse_transform([[i] for i in np.unique(y)])
inverse_labs
color_df = pd.DataFrame({enc.inverse_transform([[i]])[0][0]: (np.sum(y == i), i) for i in np.unique(y)}).T.sort_values(by=[0], ascending=False)
color_df
sns.palplot(dset_params[dataset]['cmap'](color_df[1].values / 11)[:7])
z = np.random.normal(size=(len(y), 2))
z[:,0]+=y
fig, ax = plt.subplots()
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=y,
cmap=dset_params[dataset]['cmap'],
s=dset_params[dataset]['s'],
alpha=dset_params[dataset]['alpha'],
)
fig.colorbar(sc)
# Cell types corresponding to the seven most frequent labels above (in order):
# Rods, Bipolar cells, Amacrine cells, Cones, Muller glia, Retinal ganglion cells, Horizontal cells
###Output
_____no_output_____
###Markdown
bison
###Code
fig, ax = plt.subplots()
ax.scatter(-animal[:,2], animal[:,1], s = 1, c = animal[:,0], alpha = 0.1)
ax.axis('equal')
ax.axis('off')
figdir = FIGURE_DIR / 'data_examples' / 'bison'
ensure_dir(figdir)
save_fig(figdir, dpi = 300, save_pdf=True, pad_inches = 0.1)
###Output
_____no_output_____
###Markdown
moons
###Code
x = datasets['moons']['X']['train']
y = datasets['moons']['Y']['train']
fig, ax = plt.subplots(figsize=(10,8))
ax.scatter(x[:,1], x[:,0], s = 20, c = y, alpha = 0.5, cmap = 'coolwarm')
ax.axis('equal')
ax.axis('off')
figdir = FIGURE_DIR / 'data_examples' / 'moons'
ensure_dir(figdir)
save_fig(figdir, dpi = 300, save_pdf=True, pad_inches = 0.1)
###Output
_____no_output_____ |
1. Intro to data visualization/911 Calls Exercise.ipynb | ###Markdown
911 Calls Exercise We will be analyzing some 911 call data from [Kaggle](https://www.kaggle.com/mchirico/montcoalert). The data contains the following fields:* lat : String variable, Latitude* lng: String variable, Longitude* desc: String variable, Description of the Emergency Call* zip: String variable, Zipcode* title: String variable, Title* timeStamp: String variable, YYYY-MM-DD HH:MM:SS* twp: String variable, Township* addr: String variable, Address* e: String variable, Dummy variable (always 1) Data and Setup ____** Import numpy and pandas **
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
** Import visualization libraries and set %matplotlib inline. **
###Code
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# bookmark
###Output
_____no_output_____
###Markdown
** Read in the csv file as a dataframe called df **
###Code
df = pd.read_csv('911.csv')
###Output
_____no_output_____
###Markdown
** Check the info() of the df **
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 99492 entries, 0 to 99491
Data columns (total 9 columns):
lat 99492 non-null float64
lng 99492 non-null float64
desc 99492 non-null object
zip 86637 non-null float64
title 99492 non-null object
timeStamp 99492 non-null object
twp 99449 non-null object
addr 98973 non-null object
e 99492 non-null int64
dtypes: float64(3), int64(1), object(5)
memory usage: 6.8+ MB
###Markdown
** Check the head of df **
###Code
df.head()
###Output
_____no_output_____
###Markdown
Basic Questions ** What are the top 5 zipcodes for 911 calls? **
###Code
df['zip'].value_counts().head()
###Output
_____no_output_____
###Markdown
** What are the top 5 townships (twp) for 911 calls? **
###Code
df['twp'].value_counts().head()
###Output
_____no_output_____
###Markdown
** Take a look at the 'title' column, how many unique title codes are there? **
###Code
#df['title'].unique()
df['title'].nunique()
###Output
_____no_output_____
###Markdown
Creating new features ** In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.** **For example, if the title column value is EMS: BACK PAINS/INJURY , the Reason column value would be EMS. **
###Code
df.head()
# Standard example to filter a string
s = "EMS: BACK PAINS/INJURY"
new_s = s.split(':')[0]
print(new_s)
#df.apply(lambda row: row.zip + row.zip, axis=1)
df.apply(lambda x: x.title.split(':')[0], axis=1) #x.title specifies the column of the data frame
df['Reason'] = df.apply(lambda x: x.title.split(':')[0], axis=1) #specific axis=1 to run over the rows
df.head(10)
###Output
_____no_output_____
###Markdown
** What is the most common Reason for a 911 call based off of this new column? **
###Code
df['Reason'].value_counts()
###Output
_____no_output_____
###Markdown
** Now use seaborn to create a countplot of 911 calls by Reason. **
###Code
sns.set_style('whitegrid') #setting plot grid
sns.countplot(x='Reason', data=df)
###Output
_____no_output_____
###Markdown
___** Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column? **
###Code
type(df['timeStamp'][0])
###Output
_____no_output_____
###Markdown
** You should have seen that these timestamps are still strings. Use [pd.to_datetime](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html) to convert the column from strings to DateTime objects. **
###Code
df.head()
df['timeStamp'] = pd.to_datetime(df['timeStamp']) # built-in method
type(df['timeStamp'][0])
time = df['timeStamp'].iloc[0] #date time attributes
print(time.day)
print(time.year)
print(time.date())
print(time.hour)
print(time.month_name())
print(time.day_name())
###Output
10
2015
2015-12-10
17
December
Thursday
###Markdown
** You can now grab specific attributes from a Datetime object by calling them. For example:** time = df['timeStamp'].iloc[0] time.hour**You can use Jupyter's tab method to explore the various attributes you can call. Now that the timestamp column are actually DateTime objects, use .apply() to create 3 new columns called Hour, Month, and Day of Week. You will create these columns based off of the timeStamp column, reference the solutions if you get stuck on this step.**
###Code
time = df['timeStamp'].iloc[0]
print(time.hour)
print(time.month_name())
print(time.day_name())
df['Hour'] = df.apply(lambda x: x['timeStamp'].hour, axis=1) #using DateTime objects
df['Month'] = df.apply(lambda x: x['timeStamp'].month_name(), axis=1)
df['Day of Week'] = df.apply(lambda x: x['timeStamp'].day_name(), axis=1)
#df.head(10)
df.head()
###Output
_____no_output_____
###Markdown
** Notice how the Day of Week is an integer 0-6. Use the .map() with this dictionary to map the actual string names to the day of the week: ** dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'} ** Now use seaborn to create a countplot of the Day of Week column with the hue based off of the Reason column. **
###Code
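# The exercise text above mentions mapping integer weekdays with a dictionary; this notebook
# used .day_name() instead, so 'Day of Week' already holds strings. A sketch of the .map()
# route, written to a separate (hypothetical) column so the existing one is untouched:
dmap = {0:'Mon', 1:'Tue', 2:'Wed', 3:'Thu', 4:'Fri', 5:'Sat', 6:'Sun'}
df['Day of Week (abbr)'] = df['timeStamp'].dt.dayofweek.map(dmap)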
sns.countplot(x='Day of Week', data=df, hue='Reason')
###Output
_____no_output_____
###Markdown
**Now do the same for Month:**
###Code
df['Month No'] = df.apply(lambda x: x['timeStamp'].month, axis=1)
sns.countplot(x='Month No', data=df, hue='Reason')
###Output
_____no_output_____
###Markdown
**Did you notice something strange about the Plot?**____** You should have noticed it was missing some Months, let's see if we can maybe fill in this information by plotting the information in another way, possibly a simple line plot that fills in the missing months, in order to do this, we'll need to do some work with pandas... ** ** Now create a gropuby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame. **
###Code
df.head(1)
byMonth = df.groupby(['Month No'])[['lat','lng','desc','zip','title','timeStamp','twp','addr','e','Reason','Hour','Day of Week']].count()
#data.groupby(['col1', 'col2'])['col3'].mean()
byMonth.head()
###Output
_____no_output_____
###Markdown
** Now create a simple plot off of the dataframe indicating the count of calls per month. **
###Code
byMonth['twp'].plot()
###Output
_____no_output_____
###Markdown
** Now see if you can use seaborn's lmplot() to create a linear fit on the number of calls per month. Keep in mind you may need to reset the index to a column. **
###Code
byMonth = byMonth.reset_index()
byMonth
sns.lmplot(x='Month No', y='twp', data=byMonth)
###Output
/home/onofre/anaconda3/envs/UABC_ML_Workshop/lib/python3.7/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
**Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method. **
###Code
df['Date'] = df['timeStamp'].apply(lambda t: t.date())
df.head()
###Output
_____no_output_____
###Markdown
** Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.**
###Code
byDate = df.groupby('Date').count()
byDate.head()
#byDate = byDate.reset_index()
byDate['twp'].plot(figsize=(7,5))
plt.tight_layout()
###Output
_____no_output_____
###Markdown
** Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call**
###Code
byDate.head()
df[df['Reason']=='Traffic'].groupby('Date').count()['twp'].plot(figsize=(7,5))
#byDate['twp'].plot(figsize=(7,5))
plt.title('Traffic')
plt.tight_layout()
df[df['Reason']=='Fire'].groupby('Date').count()['twp'].plot(figsize=(7,5))
plt.title('Fire')
plt.tight_layout()
df[df['Reason']=='EMS'].groupby('Date').count()['twp'].plot(figsize=(7,5))
plt.title('EMS')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
____** Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an [unstack](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html) method. Reference the solutions if you get stuck on this!**
###Code
dayHour = df.groupby(by=['Day of Week', 'Hour']).count()['Reason'].unstack()
dayHour
###Output
_____no_output_____
###Markdown
** Now create a HeatMap using this new DataFrame. **
###Code
plt.figure(figsize=(12,6))
sns.heatmap(dayHour, cmap="viridis")
###Output
_____no_output_____
###Markdown
** Now create a clustermap using this DataFrame. **
###Code
plt.figure(figsize=(12,6))
sns.clustermap(dayHour, cmap="viridis")
###Output
_____no_output_____
###Markdown
** Now repeat these same plots and operations, for a DataFrame that shows the Month as the column. **
###Code
dayMonth = df.groupby(by=['Day of Week','Month No']).count()['Reason'].unstack()
dayMonth
plt.figure(figsize=(12,7))
sns.heatmap(dayMonth, cmap='viridis')
plt.figure(figsize=(12,7))
sns.clustermap(dayMonth, cmap="viridis")
###Output
_____no_output_____ |
K Means - xclara .ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.cluster import KMeans
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')
%config InlineBackend.figure_format = 'retina'
# Importing the dataset
data = pd.read_csv('xclara.csv')
print("Input Data and Shape")
print(data.shape)
data.head()
# Getting the values and plotting it
f1 = data['V1'].values
f2 = data['V2'].values
#X = np.array(list(zip(f1, f2)))
X = np.column_stack((f1,f2))
plt.scatter(f1, f2, c='black', s=7)
plt.show()
X
#Elbow Method
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 0)
kmeans.fit(X)
wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss)
plt.title('The Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
# Number of clusters
kmeans = KMeans(n_clusters=3)
# Fitting the input data
kmeans = kmeans.fit(X)
# Getting the cluster labels
labels = kmeans.predict(X)
# Centroid values
centroids = kmeans.cluster_centers_
# Comparing with scikit-learn centroids
print("Centroid values")
print(centroids) # From sci-kit learn
labels = kmeans.labels_
plt.scatter(X[:, 0], X[:, 1], c=labels)
plt.scatter(centroids[:, 0], centroids[:, 1], marker='*', c='#050505', s=1000)
plt.show()
###Output
_____no_output_____ |
notebook/chapter03_linear_regression.ipynb | ###Markdown
Utils
###Code
# Gauss
def gauss(x,mu,s):
#:params x: 1-D array data
#:prams mu,s: \mu and \sigma
#:return: \phi(x)
d = x.shape[0]
m = mu.shape[0]
phi = np.exp(-(x.reshape(-1,1) - mu)/(2*s**2))
phi = np.concatenate(([1],phi.ravel()))
return phi
# Function
def f(x):
return 3*np.sin(x)
# Make Toy Data
def make_toy_data(N,lower = 0,upper = 2*np.pi,std = 1):
X = np.random.rand(N)*(upper-lower) + lower
y = f(X) + np.random.randn(N)*std
return X.reshape(-1,1),y.reshape(-1,1)
# Color maps
cmaps = [[0.122, 0.467, 0.706],"orange","green"]
# Plot Prediction
def plot_prediction(X_tr,y_tr,regressor,title,lower = 0,upper = 2*np.pi):
X = np.linspace(lower,upper,100).reshape(-1,1)
y_pred = regressor.predict(X)
y_true = f(X)
rmse = np.mean((y_true-y_pred)**2)**0.5
print(f"RMSE : {rmse}")
fig,ax = plt.subplots(1,1,figsize = (10,7))
ax.plot(X,y_pred,label="Predict",color=cmaps[0])
ax.plot(X,y_true,label="Ground Truth",color=cmaps[1])
ax.scatter(X_tr,y_tr,label="Training Data",color=cmaps[2])
ax.set_title(title)
plt.legend()
plt.show()
# Plot Prediction with Std
def plot_prediction_with_std(X_tr,y_tr,bayes_regressor,title,lower = 0,upper = 2*np.pi):
X = np.linspace(lower,upper,100).reshape(-1,1)
y_pred,y_std = bayes_regressor.predict(X,return_std=True)
y_true = f(X)
rmse = np.mean((y_true-y_pred)**2)**0.5
print(f"RMSE : {rmse}")
fig,ax = plt.subplots(1,1,figsize = (10,7))
ax.plot(X,y_pred,label="Predict",color=cmaps[0])
y_pred_upper = y_pred + y_std
y_pred_lower = y_pred - y_std
ax.fill_between(X.ravel(),y_pred_lower.ravel(),y_pred_upper.ravel(),alpha=0.3,color=cmaps[0])
ax.plot(X,y_true,label="Ground Truth",color=cmaps[1])
ax.scatter(X_tr,y_tr,label="Training Data",color=cmaps[2])
ax.set_title(title)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Maximum Likelihood where $\Phi$ is a design matrix, $T$ is target matrix $$W_{ML} = (\Phi^T\Phi)^{-1}\Phi^T T$$ $$\frac{1}{\beta_{ML}} = \frac{1}{NK} \sum_{n = 1}^N ||t_n - W^T\phi(x_n)||^2$$
###Code
class LinearRegression():
def __init__(self,mu = None,s = None):
self.weight = None
self.beta = None
self.mu = mu
self.s = s
self.phi = lambda x:gauss(x,self.mu,self.s)
def fit(self,X,y):
#:params X: 2-D array (N_samples,N_dims)
#:params y: 2-D array (N_samples,N_targets)
N = X.shape[0]
K = y.shape[1]
design_mat = np.vstack([self.phi(x) for x in X])
self.weight = np.linalg.inv(design_mat.T@design_mat)@design_mat.T@y
tmp = y - self.weight.T@design_mat.T
self.beta = N*K/np.sum(tmp**2)
def predict(self,X):
#:params X: 2-D array (N_samples,N_dims) N_dims = len(mu) = len(s)
design_mat = np.vstack([self.phi(x) for x in X])
return np.dot(design_mat,self.weight)
X_tr,y_tr = make_toy_data(100,std = 0.75)
mu = np.random.rand(30)*6 - 3
s = np.random.rand(30)*5
lr = LinearRegression(mu = mu,s = s)
lr.fit(X_tr,y_tr)
plot_prediction(X_tr,y_tr,lr,"LinearRegression")
###Output
RMSE : 0.9931091929541018
###Markdown
Ridge is almost the same as LinearRegression, but it has a regularization term.
###Code
class Ridge():
def __init__(self,lamda=1e-2,mu = None,s = None):
self.weight = None
self.lamda = lamda
self.mu = mu
self.s = s
self.phi = lambda x:gauss(x,self.mu,self.s)
def fit(self,X,y):
#:params X: 2-D array (N_samples,N_dims)
#:params y: 2-D array (N_samples,N_targets)
M = X.shape[1]*self.mu.shape[0] + 1
design_mat = np.vstack([self.phi(x) for x in X])
self.weight = np.linalg.inv(self.lamda*np.eye(M) + design_mat.T@design_mat)@design_mat.T@y
def predict(self,X):
#:params X: 2-D array (N_samples,N_dims) N_dims = len(mu) = len(s)
design_mat = np.vstack([self.phi(x) for x in X])
return np.dot(design_mat,self.weight)
ridge = Ridge(lamda = 1e-8,mu = mu,s = s)
ridge.fit(X_tr,y_tr)
plot_prediction(X_tr,y_tr,ridge,"Ridge")
###Output
RMSE : 0.23627845905908115
###Markdown
Bayesian Linear Regression Prior Distributin for Weight $$p(w|\alpha) = \mathcal{N}(w|0,\alpha^{-1}I)$$ Posterior Distribution for Weights $$p(w|t) = \mathcal{N}(w|m_N,S_N)$$ where $$m_N = \beta S_N \Phi^\top t$$ $$S_N^{-1} = \alpha I + \beta \Phi^\top \Phi$$ Predictive Distribution $$p(t|x) = \mathcal{N}(t|m_N^\top\phi(x),\sigma_N^2(x))$$ where $$\sigma_N^2(x) = \frac{1}{\beta} + \phi(x)^\top S_N\phi(x)$$ Evidence $$\ln{p(t|\alpha,\beta)} = \frac{M}{2}\ln{\alpha} + \frac{N}{2}\ln{\beta} - E(m_N) + \frac{1}{2}\ln{|S_N|} - \frac{N}{2}\ln{2\pi}$$ where $$E(m_N) = \frac{\beta}{2}||t - \Phi m_N||^2 + \frac{\alpha}{2}m_N^\top m_N$$ Maximizing the evidence function Let $\lambda_i$ be eigenvalue of $\beta\Phi^\top\Phi$ $$\gamma = \sum_{i = 1}^M \frac{\lambda_i}{\alpha + \lambda_i}$$ $$\alpha = \frac{\gamma}{m_N^\top m_N}$$ $$\frac{1}{\beta} = \frac{1}{N - \gamma}||t - \Phi m_N||^2$$
###Code
class BayesianLinearRegression():
def __init__(self,alpha = 1e-1,beta = 1e-1,mu = None,s = None):
self.weight = None
self.S = None
self.M = None
self.N = 0
self.alpha = alpha
self.beta = beta
self.mu = mu
self.s = s
self.phi = lambda x:gauss(x,mu,s)
def fit(self,X,y,optimize_evidence = False,n_iters = 20,threshold = 1e-3):
#:params X: 2-D array (N_samples,N_dims)
#:params y: 1-D array (N_samples)
#:params optimze_evidence: if alpha and beta is optimized or not
self.N = X.shape[0]
self.M = X.shape[1]*self.mu.shape[0] + 1
if optimize_evidence:
self.optimize_evidence_(X,y,n_iters,threshold)
design_mat = np.vstack([self.phi(x) for x in X])
self.S = np.linalg.inv(self.alpha*np.eye(self.M) + self.beta*design_mat.T@design_mat)
self.weight = self.beta*self.S@design_mat.T@y
def partial_fit(self,X,y):
# Before this method is called, fit() should be called
self.N += X.shape[0]
design_mat = np.vstack([self.phi(x) for x in X])
S_old_inv = np.linalg.inv(self.S)
self.S = np.linalg.inv(S_old_inv + self.beta*design_mat.T@design_mat)
self.weight = self.S@([email protected] + self.beta*design_mat.T@y)
def calc_evidence_(self,tmp):
E = self.beta/2*tmp + self.alpha/2*np.dot(self.weight,self.weight)
        evidence = self.M*np.log(self.alpha)/2 + self.N*np.log(self.beta)/2 - E + np.log(np.linalg.det(self.S))/2 - self.N*np.log(2*np.pi)/2 # (1/2)ln|S_N| term, matching the evidence formula above
return evidence
def optimize_evidence_(self,X,y,n_iters,threshold):
#:params n_iters: Number of times to optimize alpha and beta
        #:params threshold: If the difference between successive evidence values is lower than this, optimization stops early
design_mat = np.vstack([self.phi(x) for x in X])
C = design_mat.T@design_mat
org_lambdas,_ = np.linalg.eig(C)
with warnings.catch_warnings(): # Ignore Warnings
warnings.simplefilter('ignore')
org_lambdas = org_lambdas.astype(np.float64)
before_evidence = -10**10
for _ in range(n_iters):
self.S = np.linalg.inv(self.alpha*np.eye(self.M) + self.beta*C)
self.weight = self.beta*self.S@design_mat.T@y
lambdas = self.beta*org_lambdas
gamma = np.sum(lambdas/(lambdas + self.alpha))
self.alpha = gamma/np.dot(self.weight,self.weight)
tmp = y - [email protected]
tmp = np.dot(tmp,tmp)
self.beta = (self.N - gamma)/tmp
evidence = self.calc_evidence_(tmp)
if np.abs(before_evidence-evidence) < threshold:
break
before_evidence = evidence
def predict(self,X,return_std = False):
design_mat = np.vstack([self.phi(x) for x in X])
pred = np.dot(design_mat,self.weight).ravel()
if return_std:
std = np.sqrt(1/self.beta + np.diag([email protected]@design_mat.T))
return pred,std
else:
return pred
blr = BayesianLinearRegression(alpha = 1e-7,beta = 1.3,mu = mu,s = s)
blr.fit(X_tr,y_tr.ravel())
plot_prediction_with_std(X_tr,y_tr,blr,"BayesianLinearRegression")
# partial fit
X_new,y_new = make_toy_data(100,lower = 0,upper = np.pi,std = 0.5)
blr.partial_fit(X_new,y_new.ravel())
plot_prediction_with_std(np.concatenate([X_tr,X_new]),
np.concatenate([y_tr,y_new]),
blr,"BayesianLinearRegression",lower = 0,upper = 2*np.pi)
# With Evidence Optimization
blr = BayesianLinearRegression(alpha = 10,beta = 10,mu = mu,s = s)
blr.fit(X_tr,y_tr.ravel(),optimize_evidence = True,n_iters = 100,threshold = 1e-8)
print(f"Optimized alpha : {blr.alpha}")
print(f"Optimized beta : {blr.beta}")
plot_prediction_with_std(X_tr,y_tr,blr,"BayesianLinearRegression with EvidenceOptimization")
###Output
Optimized alpha : 4.15731245173827e-06
Optimized beta : 2.0158980963232715
RMSE : 3.058773617893491
|
IPJComp/Jcomp_ImgProcessing.ipynb | ###Markdown
Image Augmentation1. All RGB image data will be used to form new samples for training2. New samples will be transformed using ImageDataGenerator3. Images will be resampled using normalization (divide each pixel by 255), shear range, zoom range, brightness, etc.
###Code
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import ImageDataGenerator, img_to_array
from numpy import expand_dims
img = cv2.imread("/users/mithun/IPJcomp/train/happy/303.jpg")
samples = expand_dims(img, axis=0)
samples.shape
# Rotation of Image Data
datagen = ImageDataGenerator(rotation_range=25)
IDG = datagen.flow(samples, batch_size = 1)
fig, ax = plt.subplots(1, 5, figsize = (18,18))
for i in range(5):
fig = plt.figure()
batch = IDG.next()
image1 = batch[0].astype('uint8')
ax[i].imshow(image1)
plt.show()
# Horizontal flip of Image Data
datagen = ImageDataGenerator(horizontal_flip=True)
IDG = datagen.flow(samples, batch_size = 1)
fig, ax = plt.subplots(1, 5, figsize = (18,18))
for i in range(5):
fig = plt.figure()
batch = IDG.next()
image1 = batch[0].astype('uint8')
ax[i].imshow(image1)
plt.show()
# Shear Change of Image Data
datagen = ImageDataGenerator(shear_range = 5)
IDG = datagen.flow(samples, batch_size = 1)
fig, ax = plt.subplots(1, 5, figsize = (18,18))
for i in range(5):
fig = plt.figure()
batch = IDG.next()
image1 = batch[0].astype('uint8')
ax[i].imshow(image1)
plt.show()
# Zoom Change of Image Data
datagen = ImageDataGenerator(zoom_range= 0.2)
IDG = datagen.flow(samples, batch_size = 1)
fig, ax = plt.subplots(1, 5, figsize = (18,18))
for i in range(5):
fig = plt.figure()
batch = IDG.next()
image1 = batch[0].astype('uint8')
ax[i].imshow(image1)
plt.show()
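# Brightness change of Image Data
# (added sketch: brightness augmentation is mentioned in the markdown above but not
#  demonstrated; brightness_range is a standard ImageDataGenerator argument, where
#  values below 1.0 darken and values above 1.0 brighten the image)
datagen = ImageDataGenerator(brightness_range=(0.5, 1.5))
IDG = datagen.flow(samples, batch_size = 1)
fig, ax = plt.subplots(1, 5, figsize = (18,18))
for i in range(5):
    batch = IDG.next()
    image1 = batch[0].astype('uint8')
    ax[i].imshow(image1)
plt.show()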
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
shear_range=0.2,
zoom_range=0.2,
fill_mode='nearest',
horizontal_flip=True)
validation_datagen = ImageDataGenerator(rescale=1./255)
training_set = train_datagen.flow_from_directory(
'/users/mithun/IPJcomp/train/',
target_size=(64,64),
batch_size=32,
class_mode='categorical')
validation_set = train_datagen.flow_from_directory(
'/users/mithun/IPJcomp/validation/',
target_size=(64,64),
batch_size=32,
class_mode='categorical')
training_set.image_shape
training_set.class_indices
###Output
_____no_output_____
###Markdown
Building the CNN Model
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Conv2D, MaxPooling2D, Dropout
from tensorflow.keras.optimizers import Adam
def build_model():
# Feed Foreward NN
model = Sequential()
# Conv2D - I
# Padding = 'same' : This is zero padding
model.add(Conv2D(filters = 64, kernel_size = (3,3), activation = 'relu', padding = 'same',
input_shape = training_set.image_shape))
model.add(MaxPooling2D(pool_size = (2,2), strides = (2,2)))
# Conv2D - II
model.add(Conv2D(filters = 128, kernel_size = (3,3), activation = 'relu', padding = 'same'))
model.add(MaxPooling2D(pool_size = (2,2), strides = (2,2)))
# Conv2D - III
model.add(Conv2D(filters = 256, kernel_size = (3,3), activation = 'relu', padding = 'same'))
model.add(MaxPooling2D(pool_size = (2,2), strides = (2,2)))
# Conv2D - IV
model.add(Conv2D(filters = 512, kernel_size = (3,3), activation = 'relu', padding = 'same'))
model.add(MaxPooling2D(pool_size = (2,2), strides = (2,2)))
# Flatten
model.add(Flatten())
# Full Connected layer (FC)
model.add(Dense(units = 256, activation = 'relu'))
model.add(Dropout(0.25))
model.add(Dense(units = 7, activation = 'softmax'))
# Learning rate
adam_optimizer = Adam(learning_rate = 0.001)
# loss = categorical_crossentropy
model.compile(optimizer = adam_optimizer, loss = 'categorical_crossentropy', metrics = ['accuracy'])
return model
model = build_model()
model.summary()
# Callbacks
from tensorflow.keras import callbacks
filepath = "/users/mithun/IPJcomp/FaceExpr_Best_Model.hdf5"
checkpoint = callbacks.ModelCheckpoint(filepath, monitor='val_loss', save_best_only = True, mode = 'min',
verbose = 1)
checkpoint
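# Hedged sketch (added; not part of the original run): training the CNN with the
# augmented generators and the checkpoint callback would plausibly look like the call
# below. It is guarded by a flag so the recorded notebook outputs are unaffected;
# epochs = 25 is an assumed value.
RUN_TRAINING = False
if RUN_TRAINING:
    history = model.fit(
        training_set,
        steps_per_epoch = training_set.n // training_set.batch_size,
        validation_data = validation_set,
        validation_steps = validation_set.n // validation_set.batch_size,
        epochs = 25,
        callbacks = [checkpoint])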
###Output
_____no_output_____
###Markdown
steps_per_epoch = training_set.n//training_set.batch_size
###Code
# IMPLEMENTING LIVE DETECTION OF FACE expression
from tensorflow.keras.preprocessing import image
import datetime
model.load_weights("/users/mithun/IPJcomp/FaceExpr_Best_Model.hdf5")
color_dict={0:(0,255,0),1:(255,0,0),2:(153,0,153),3:(0,0,255),4:(125,125,125),5:(147,20,255),6:(255,0,255)}
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("/users/mithun/IPJcomp/haarcascade_frontalface_default.xml")
while cap.isOpened():
_,img=cap.read()
face=face_cascade.detectMultiScale(img,scaleFactor=1.3,minNeighbors=5)
for(x,y,w,h) in face:
face_img = img[y:y+h, x:x+w]
cv2.imwrite('temp.jpg',face_img)
test_image=image.load_img('temp.jpg',target_size=(64,64,3))
test_image=image.img_to_array(test_image)
test_image=np.expand_dims(test_image,axis=0)
pred=model.predict_classes(test_image)[0]
if pred==0:
cv2.rectangle(img,(x,y),(x+w,y+h),color_dict[pred],2)
cv2.rectangle(img,(x,y-40),(x+w,y),color_dict[pred],-1)
cv2.putText(img,'Angry',(x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.7,(255,255,255),2)
elif pred==1:
cv2.rectangle(img,(x,y),(x+w,y+h),color_dict[pred],2)
cv2.rectangle(img,(x,y-40),(x+w,y),color_dict[pred],-1)
cv2.putText(img,'Disgust',(x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.7,(255,255,255),2)
elif pred==2:
cv2.rectangle(img,(x,y),(x+w,y+h),color_dict[pred],2)
cv2.rectangle(img,(x,y-40),(x+w,y),color_dict[pred],-1)
cv2.putText(img,'Fear',(x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.7,(255,255,255),2)
elif pred==3:
cv2.rectangle(img,(x,y),(x+w,y+h),color_dict[pred],2)
cv2.rectangle(img,(x,y-40),(x+w,y),color_dict[pred],-1)
cv2.putText(img,'Happy',(x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.7,(255,255,255),2)
elif pred==4:
cv2.rectangle(img,(x,y),(x+w,y+h),color_dict[pred],2)
cv2.rectangle(img,(x,y-40),(x+w,y),color_dict[pred],-1)
cv2.putText(img,'Neutral',(x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.7,(255,255,255),2)
elif pred==5:
cv2.rectangle(img,(x,y),(x+w,y+h),color_dict[pred],2)
cv2.rectangle(img,(x,y-40),(x+w,y),color_dict[pred],-1)
cv2.putText(img,'Sad',(x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.7,(255,255,255),2)
elif pred==6:
cv2.rectangle(img,(x,y),(x+w,y+h),color_dict[pred],2)
cv2.rectangle(img,(x,y-40),(x+w,y),color_dict[pred],-1)
cv2.putText(img,'Surprise',(x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.7,(255,255,255),2)
datet=str(datetime.datetime.now())
cv2.putText(img,datet,(400,450),cv2.FONT_HERSHEY_SIMPLEX,0.5,(255,255,255),1)
cv2.imshow('img',img)
if cv2.waitKey(1)==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____ |
tracking-caffe-model.ipynb | ###Markdown
Object Tracking Class
###Code
class Tracker:
def __init__(self, maxLost = 30): # maxLost: maximum object lost counted when the object is being tracked
self.nextObjectID = 0 # ID of next object
self.objects = OrderedDict() # stores ID:Locations
self.lost = OrderedDict() # stores ID:Lost_count
self.maxLost = maxLost # maximum number of frames object was not detected.
def addObject(self, new_object_location):
self.objects[self.nextObjectID] = new_object_location # store new object location
self.lost[self.nextObjectID] = 0 # initialize frame_counts for when new object is undetected
self.nextObjectID += 1
def removeObject(self, objectID): # remove tracker data after object is lost
del self.objects[objectID]
del self.lost[objectID]
@staticmethod
def getLocation(bounding_box):
xlt, ylt, xrb, yrb = bounding_box
return (int((xlt + xrb) / 2.0), int((ylt + yrb) / 2.0))
def update(self, detections):
if len(detections) == 0: # if no object detected in the frame
lost_ids = list(self.lost.keys())
for objectID in lost_ids:
self.lost[objectID] +=1
if self.lost[objectID] > self.maxLost: self.removeObject(objectID)
return self.objects
new_object_locations = np.zeros((len(detections), 2), dtype="int") # current object locations
for (i, detection) in enumerate(detections): new_object_locations[i] = self.getLocation(detection)
if len(self.objects)==0:
for i in range(0, len(detections)): self.addObject(new_object_locations[i])
else:
objectIDs = list(self.objects.keys())
previous_object_locations = np.array(list(self.objects.values()))
            D = distance.cdist(previous_object_locations, new_object_locations) # pairwise distances between previous centroids and current detections
            row_idx = D.min(axis=1).argsort() # previous objects (rows) ordered by distance to their closest new detection
            cols_idx = D.argmin(axis=1)[row_idx] # for each of those rows, the column index of its closest new detection
assignedRows, assignedCols = set(), set()
for (row, col) in zip(row_idx, cols_idx):
if row in assignedRows or col in assignedCols:
continue
objectID = objectIDs[row]
self.objects[objectID] = new_object_locations[col]
self.lost[objectID] = 0
assignedRows.add(row)
assignedCols.add(col)
unassignedRows = set(range(0, D.shape[0])).difference(assignedRows)
unassignedCols = set(range(0, D.shape[1])).difference(assignedCols)
if D.shape[0]>=D.shape[1]:
for row in unassignedRows:
objectID = objectIDs[row]
self.lost[objectID] += 1
if self.lost[objectID] > self.maxLost:
self.removeObject(objectID)
else:
for col in unassignedCols:
self.addObject(new_object_locations[col])
return self.objects
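# Minimal usage sketch (added): detections are (x_left, y_top, x_right, y_bottom) boxes and
# update() returns an OrderedDict mapping objectID -> centroid. Guarded by a flag so the
# recorded notebook outputs are unaffected.
RUN_TRACKER_DEMO = False
if RUN_TRACKER_DEMO:
    demo_tracker = Tracker(maxLost=5)
    print(demo_tracker.update([(10, 10, 50, 50), (100, 120, 160, 200)]))  # two new IDs assigned
    print(demo_tracker.update([(12, 11, 52, 51)]))  # first object matched, second counted as lost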
###Output
_____no_output_____
###Markdown
Loading Object Detector Model Face Detection and Tracking Here, the Face Detection Caffe Model is used. The files are taken from the following link: https://github.com/opencv/opencv_3rdparty/tree/dnn_samples_face_detector_20170830
###Code
caffemodel = {"prototxt":"./caffemodel_dir/deploy.prototxt",
"model":"./caffemodel_dir/res10_300x300_ssd_iter_140000.caffemodel",
"acc_threshold":0.50 # neglected detections with probability less than acc_threshold value
}
net = cv.dnn.readNetFromCaffe(caffemodel["prototxt"], caffemodel["model"])
###Output
_____no_output_____
###Markdown
Instantiate the Tracker Class
###Code
maxLost = 60 # maximum number of object losts counted when the object is being tracked
tracker = Tracker(maxLost = maxLost)
###Output
_____no_output_____
###Markdown
Initiate the OpenCV video capture object The `video_src` can take two values:1. If `video_src=0`: OpenCV accesses the camera connected through USB2. If `video_src='video_file_path'`: OpenCV will access the video file at the given path (can be MP4, AVI, etc. format)
###Code
video_src = 0
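# Alternatively (added example with a hypothetical file name): read from a video file instead of the camera
# video_src = "sample_video.mp4"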
cap = cv.VideoCapture(video_src)
###Output
_____no_output_____
###Markdown
Start object detection and tracking
###Code
(H, W) = (None, None) # input image height and width for the network
while(True):
ok, image = cap.read()
if not ok:
print("Cannot read the video feed.")
break
image = cv.resize(image, (400, 400), interpolation = cv.INTER_AREA)
if W is None or H is None: (H, W) = image.shape[:2]
blob = cv.dnn.blobFromImage(image, 1.0, (W, H), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward() # detect objects using object detection model
detections_bbox = [] # bounding box for detections
for i in range(0, detections.shape[2]):
if detections[0, 0, i, 2] > caffemodel["acc_threshold"]:
box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
detections_bbox.append(box.astype("int"))
# draw a bounding box surrounding the object so we can visualize it
(startX, startY, endX, endY) = box.astype("int")
cv.rectangle(image, (startX, startY), (endX, endY), (0, 255, 0), 2)
objects = tracker.update(detections_bbox) # update tracker based on the newly detected objects
for (objectID, centroid) in objects.items():
text = "ID {}".format(objectID)
cv.putText(image, text, (centroid[0] - 10, centroid[1] - 10), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
cv.circle(image, (centroid[0], centroid[1]), 4, (0, 255, 0), -1)
cv.imshow("image", image)
if cv.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv.destroyWindow("image")
###Output
_____no_output_____ |
interviewq_exercises/q105_stats_exponential_distribution_battery_time_to_break.ipynb | ###Markdown
Question 105 - Car battery - Exponential distribution question Statistics Exponential Distribution A given car has a number of miles it can run before its battery is depleted, where the number of miles is exponentially distributed with an average of 10,000 miles to depletion. If a given individual needs to make a trip that's 3,000 miles, what is the probability that she/he will be able to complete the trip without having to replace the battery? You can assume the car battery is new for this problem. For some reading material on exponential distributions, you can visit this [link](https://www.probabilitycourse.com/chapter4/4_2_2_exponential.php).
###Code
from math import exp
mu = 10000 # avg miles to depletion
d = 3000 # distance in miles we wonder about
L = 1/mu # Exponential distribution has parameter L = 1/mu
chance = 1-exp(-L*d) # exp dist CDF is 1-exp(-Lx)
print(f'chance to fail at {d} miles is {chance:.2f}')
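# Note (added): the question asks for the probability of completing the trip, which is the
# complement (the survival function): exp(-L*d) = exp(-0.3) ≈ 0.74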
###Output
chance to fail at 3000 miles is 0.26
|
how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/automl-forecasting-function.ipynb | ###Markdown
Automated Machine Learning Forecasting away from training data Contents1. [Introduction](Introduction)2. [Setup](Setup)3. [Data](Data)4. [Prepare remote compute and data.](prepare_remote)4. [Create the configuration and train a forecaster](train)5. [Forecasting from the trained model](forecasting)6. [Forecasting away from training data](forecasting_away) IntroductionThis notebook demonstrates the full interface to the `forecast()` function. The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high frequency forecasting** when forecasts need to be made more frequently than the model can be retrained. Examples are in Internet of Things and predictive cloud resource scaling.Here we show how to use the `forecast()` function when a time gap exists between training data and prediction period.Terminology:* forecast origin: the last period when the target value is known* forecast periods(s): the period(s) for which the value of the target is desired.* forecast horizon: the number of forecast periods* lookback: how many past periods (before forecast origin) the model function depends on. The larger of number of lags and length of rolling window.* prediction context: `lookback` periods immediately preceding the forecast origin Setup Please make sure you have followed the `configuration.ipynb` notebook so that your ML workspace information is saved in the config file.
###Code
import os
import pandas as pd
import numpy as np
import logging
import warnings
from azureml.core.dataset import Dataset
from pandas.tseries.frequencies import to_offset
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
np.set_printoptions(precision=4, suppress=True, linewidth=120)
import azureml.core
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-forecast-function-demo'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Data For demonstration purposes, we will generate the data artificially and use it for forecasting.
###Code
TIME_COLUMN_NAME = 'date'
GRAIN_COLUMN_NAME = 'grain'
TARGET_COLUMN_NAME = 'y'
def get_timeseries(train_len: int,
test_len: int,
time_column_name: str,
target_column_name: str,
grain_column_name: str,
grains: int = 1,
freq: str = 'H'):
"""
Return the time series of designed length.
:param train_len: The length of training data (one series).
:type train_len: int
:param test_len: The length of testing data (one series).
:type test_len: int
:param time_column_name: The desired name of a time column.
:type time_column_name: str
    :param target_column_name: The desired name of a target column.
    :type target_column_name: str
    :param grain_column_name: The desired name of a grain column.
    :type grain_column_name: str
:param grains: The number of grains.
:type grains: int
:param freq: The frequency string representing pandas offset.
see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
:type freq: str
:returns: the tuple of train and test data sets.
:rtype: tuple
"""
data_train = [] # type: List[pd.DataFrame]
data_test = [] # type: List[pd.DataFrame]
data_length = train_len + test_len
for i in range(grains):
X = pd.DataFrame({
time_column_name: pd.date_range(start='2000-01-01',
periods=data_length,
freq=freq),
target_column_name: np.arange(data_length).astype(float) + np.random.rand(data_length) + i*5,
'ext_predictor': np.asarray(range(42, 42 + data_length)),
grain_column_name: np.repeat('g{}'.format(i), data_length)
})
data_train.append(X[:train_len])
data_test.append(X[train_len:])
X_train = pd.concat(data_train)
y_train = X_train.pop(target_column_name).values
X_test = pd.concat(data_test)
y_test = X_test.pop(target_column_name).values
return X_train, y_train, X_test, y_test
n_test_periods = 6
n_train_periods = 30
X_train, y_train, X_test, y_test = get_timeseries(train_len=n_train_periods,
test_len=n_test_periods,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
###Output
_____no_output_____
###Markdown
Let's see what the training data looks like.
###Code
X_train.tail()
# plot the example time series
import matplotlib.pyplot as plt
whole_data = X_train.copy()
target_label = 'y'
whole_data[target_label] = y_train
for g in whole_data.groupby('grain'):
plt.plot(g[1]['date'].values, g[1]['y'].values, label=g[0])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Prepare remote compute and data. The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create a [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
# We need to save the artificial data and then upload them to the default workspace datastore.
DATA_PATH = "fc_fn_data"
DATA_PATH_X = "{}/data_train.csv".format(DATA_PATH)
if not os.path.isdir('data'):
os.mkdir('data')
pd.DataFrame(whole_data).to_csv("data/data_train.csv", index=False)
# Upload saved data to the default data store.
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path=DATA_PATH, overwrite=True, show_progress=True)
train_data = Dataset.Tabular.from_delimited_files(path=ds.path(DATA_PATH_X))
###Output
_____no_output_____
###Markdown
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
###Code
amlcompute_cluster_name = "cpu-cluster-fcfn"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
    # Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
###Output
_____no_output_____
###Markdown
Create the configuration and train a forecaster First generate the configuration, in which we:* Set metadata columns: target, time column and grain column names.* Validate our data using cross validation with rolling window method.* Set normalized root mean squared error as a metric to select the best model.* Set early termination to True, so the iterations through the models will stop when no improvements in accuracy score will be made.* Set limitations on the length of experiment run to 15 minutes.* Finally, we set the task to be forecasting.* We apply the lag lead operator to the target value i.e. we use the previous values as a predictor for the future ones, as illustrated below.
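For example, with `target_lags = [1,2,3]` (as set in the cell below), the featurization adds the three previous target values $y_{t-1}, y_{t-2}, y_{t-3}$ as extra predictors when forecasting $y_t$.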
###Code
lags = [1,2,3]
max_horizon = n_test_periods
time_series_settings = {
'time_column_name': TIME_COLUMN_NAME,
'grain_column_names': [ GRAIN_COLUMN_NAME ],
'max_horizon': max_horizon,
'target_lags': lags
}
###Output
_____no_output_____
###Markdown
Run the model selection and training process.
###Code
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_forecasting_function.log',
primary_metric='normalized_root_mean_squared_error',
experiment_timeout_minutes=15,
enable_early_stopping=True,
training_data=train_data,
compute_target=compute_target,
n_cross_validations=3,
verbosity = logging.INFO,
max_concurrent_iterations=4,
max_cores_per_iteration=-1,
label_column_name=target_label,
**time_series_settings)
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
# Retrieve the best model to use it further.
_, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Forecasting from the trained model In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data. X_train is directly followed by the X_test Let's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts that are made on a daily or slower cadence typically fall into this category. Retraining the model every time benefits the accuracy because the most recent data is often the most informative. We use `X_test` as a **forecast request** to generate the predictions. Typical path: X_test is known, forecast all upcoming periods
###Code
# The data set contains hourly data, the training set ends at 01/02/2000 at 05:00
# These are predictions we are asking the model to make (does not contain the target column y),
# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data
X_test
y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test)
# xy_nogap contains the predictions in the _automl_target_col column.
# Those same numbers are output in y_pred_no_gap
xy_nogap
###Output
_____no_output_____
###Markdown
Confidence intervals The forecasting model may also be used to predict forecasting intervals by running ```forecast_quantiles()```. This method accepts the same parameters as forecast().
###Code
quantiles = fitted_model.forecast_quantiles(X_test)
quantiles
###Output
_____no_output_____
###Markdown
Distribution forecastsOften the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such case, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". This is called a "service level". Here is how you get quantile forecasts.
###Code
# specify which quantiles you would like
fitted_model.quantiles = [0.01, 0.5, 0.95]
# use forecast_quantiles function, not the forecast() one
y_pred_quantiles = fitted_model.forecast_quantiles(X_test)
# it all nicely aligns column-wise
pd.concat([X_test.reset_index(), y_pred_quantiles], axis=1)
###Output
_____no_output_____
###Markdown
Destination-date forecast: "just do something"In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to "destination date". The destination date still needs to fit within the maximum horizon from training.
###Code
# We will take the destination date as a last date in the test set.
dest = max(X_test[TIME_COLUMN_NAME])
y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)
# This form also shows how we imputed the predictors which were not given. (Not so well! Use with caution!)
xy_dest
###Output
_____no_output_____
###Markdown
Forecasting away from training data Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model "looks back" -- uses previous values of the target -- then we somehow need to provide those values to the model.The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per grain, so each grain can have a different forecast origin. The part of data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_test` (aligned with corresponding times in `X_test`).
###Code
# generate the same kind of test data we trained on,
# but now make the train set much longer, so that the test set will be in the future
X_context, y_context, X_away, y_away = get_timeseries(train_len=42, # train data was 30 steps long
test_len=4,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
# end of the data we trained on
print(X_train.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].max())
# start of the data we want to predict on
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].min())
###Output
_____no_output_____
###Markdown
There is a gap of 12 hours between end of training and beginning of `X_away`. (It looks like 13 because all timestamps point to the start of the one hour periods.) Using only `X_away` will fail without adding context data for the model to consume.
###Code
try:
y_pred_away, xy_away = fitted_model.forecast(X_away)
xy_away
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
How should we read that error message? The forecast origin is at the last time the model saw an actual value of `y` (the target). That was at the end of the training data! The model is attempting to forecast from the end of the training data, but the requested forecast periods are past the maximum horizon. We need to provide a definite `y` value to establish the forecast origin. We will use this helper function to take the required amount of context from the data preceding the testing data. Its definition is intentionally simplified to keep the idea clear.
###Code
def make_forecasting_query(fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback):
"""
This function will take the full dataset, and create the query
to predict all values of the grain from the `forecast_origin`
forward for the next `horizon` horizons. Context from previous
`lookback` periods will be included.
fulldata: pandas.DataFrame a time series dataset. Needs to contain X and y.
time_column_name: string which column (must be in fulldata) is the time axis
target_column_name: string which column (must be in fulldata) is to be forecast
forecast_origin: datetime type the last time we (pretend to) have target values
horizon: timedelta how far forward, in time units (not periods)
lookback: timedelta how far back does the model look?
Example:
```
forecast_origin = pd.to_datetime('2012-09-01') + pd.DateOffset(days=5) # forecast 5 days after end of training
print(forecast_origin)
X_query, y_query = make_forecasting_query(data,
forecast_origin = forecast_origin,
horizon = pd.DateOffset(days=7), # 7 days into the future
lookback = pd.DateOffset(days=1), # model has lag 1 period (day)
)
```
"""
X_past = fulldata[ (fulldata[ time_column_name ] > forecast_origin - lookback) &
(fulldata[ time_column_name ] <= forecast_origin)
]
X_future = fulldata[ (fulldata[ time_column_name ] > forecast_origin) &
(fulldata[ time_column_name ] <= forecast_origin + horizon)
]
y_past = X_past.pop(target_column_name).values.astype(np.float)
y_future = X_future.pop(target_column_name).values.astype(np.float)
# Now take y_future and turn it into question marks
y_query = y_future.copy().astype(np.float) # because sometimes life hands you an int
y_query.fill(np.NaN)
print("X_past is " + str(X_past.shape) + " - shaped")
print("X_future is " + str(X_future.shape) + " - shaped")
print("y_past is " + str(y_past.shape) + " - shaped")
print("y_query is " + str(y_query.shape) + " - shaped")
X_pred = pd.concat([X_past, X_future])
y_pred = np.concatenate([y_past, y_query])
return X_pred, y_pred
###Output
_____no_output_____
###Markdown
Let's see where the context data ends - it ends, by construction, just before the testing data starts.
###Code
print(X_context.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
X_context.tail(5)
# Since the length of the lookback is 3,
# we need to add 3 periods from the context to the request
# so that the model has the data it needs
# Put the X and y back together for a while.
# They like each other and it makes them happy.
X_context[TARGET_COLUMN_NAME] = y_context
X_away[TARGET_COLUMN_NAME] = y_away
fulldata = pd.concat([X_context, X_away])
# forecast origin is the last point of data, which is one 1-hr period before test
forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)
# it is indeed the last point of the context
assert forecast_origin == X_context[TIME_COLUMN_NAME].max()
print("Forecast origin: " + str(forecast_origin))
# the model uses lags and rolling windows to look back in time
n_lookback_periods = max(lags)
lookback = pd.DateOffset(hours=n_lookback_periods)
horizon = pd.DateOffset(hours=max_horizon)
# now make the forecast query from context (refer to figure)
X_pred, y_pred = make_forecasting_query(fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME,
forecast_origin, horizon, lookback)
# show the forecast request aligned
X_show = X_pred.copy()
X_show[TARGET_COLUMN_NAME] = y_pred
X_show
###Output
_____no_output_____
###Markdown
Note that the forecast origin is at 17:00 for both grains, and periods from 18:00 are to be forecast.
###Code
# Now everything works
y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)
# show the forecast aligned
X_show = xy_away.reset_index()
# without the generated features
X_show[['date', 'grain', 'ext_predictor', '_automl_target_col']]
# prediction is in _automl_target_col
###Output
_____no_output_____
###Markdown
Automated Machine Learning Forecasting away from training data Contents1. [Introduction](Introduction)2. [Setup](Setup)3. [Data](Data)4. [Prepare remote compute and data.](prepare_remote)4. [Create the configuration and train a forecaster](train)5. [Forecasting from the trained model](forecasting)6. [Forecasting away from training data](forecasting_away) IntroductionThis notebook demonstrates the full interface to the `forecast()` function. The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high frequency forecasting** when forecasts need to be made more frequently than the model can be retrained. Examples are in Internet of Things and predictive cloud resource scaling.Here we show how to use the `forecast()` function when a time gap exists between training data and prediction period.Terminology:* forecast origin: the last period when the target value is known* forecast periods(s): the period(s) for which the value of the target is desired.* forecast horizon: the number of forecast periods* lookback: how many past periods (before forecast origin) the model function depends on. The larger of number of lags and length of rolling window.* prediction context: `lookback` periods immediately preceding the forecast origin Setup Please make sure you have followed the `configuration.ipynb` notebook so that your ML workspace information is saved in the config file.
###Code
import os
import pandas as pd
import numpy as np
import logging
import warnings
from azureml.core.dataset import Dataset
from pandas.tseries.frequencies import to_offset
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
np.set_printoptions(precision=4, suppress=True, linewidth=120)
import azureml.core
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-forecast-function-demo'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Data For demonstration purposes, we will generate the data artificially and use it for forecasting.
###Code
TIME_COLUMN_NAME = 'date'
GRAIN_COLUMN_NAME = 'grain'
TARGET_COLUMN_NAME = 'y'
def get_timeseries(train_len: int,
test_len: int,
time_column_name: str,
target_column_name: str,
grain_column_name: str,
grains: int = 1,
freq: str = 'H'):
"""
Return the time series of designed length.
:param train_len: The length of training data (one series).
:type train_len: int
:param test_len: The length of testing data (one series).
:type test_len: int
:param time_column_name: The desired name of a time column.
:type time_column_name: str
    :param target_column_name: The desired name of a target column.
    :type target_column_name: str
    :param grain_column_name: The desired name of a grain column.
    :type grain_column_name: str
:param grains: The number of grains.
:type grains: int
:param freq: The frequency string representing pandas offset.
see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
:type freq: str
:returns: the tuple of train and test data sets.
:rtype: tuple
"""
data_train = [] # type: List[pd.DataFrame]
data_test = [] # type: List[pd.DataFrame]
data_length = train_len + test_len
for i in range(grains):
X = pd.DataFrame({
time_column_name: pd.date_range(start='2000-01-01',
periods=data_length,
freq=freq),
target_column_name: np.arange(data_length).astype(float) + np.random.rand(data_length) + i*5,
'ext_predictor': np.asarray(range(42, 42 + data_length)),
grain_column_name: np.repeat('g{}'.format(i), data_length)
})
data_train.append(X[:train_len])
data_test.append(X[train_len:])
X_train = pd.concat(data_train)
y_train = X_train.pop(target_column_name).values
X_test = pd.concat(data_test)
y_test = X_test.pop(target_column_name).values
return X_train, y_train, X_test, y_test
n_test_periods = 6
n_train_periods = 30
X_train, y_train, X_test, y_test = get_timeseries(train_len=n_train_periods,
test_len=n_test_periods,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
###Output
_____no_output_____
###Markdown
Let's see what the training data looks like.
###Code
X_train.tail()
# plot the example time series
import matplotlib.pyplot as plt
whole_data = X_train.copy()
target_label = 'y'
whole_data[target_label] = y_train
for g in whole_data.groupby('grain'):
plt.plot(g[1]['date'].values, g[1]['y'].values, label=g[0])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Prepare remote compute and data. The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create a [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
# We need to save the artificial data and then upload them to the default workspace datastore.
DATA_PATH = "fc_fn_data"
DATA_PATH_X = "{}/data_train.csv".format(DATA_PATH)
if not os.path.isdir('data'):
os.mkdir('data')
pd.DataFrame(whole_data).to_csv("data/data_train.csv", index=False)
# Upload saved data to the default data store.
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path=DATA_PATH, overwrite=True, show_progress=True)
train_data = Dataset.Tabular.from_delimited_files(path=ds.path(DATA_PATH_X))
###Output
_____no_output_____
###Markdown
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
###Code
amlcompute_cluster_name = "cpu-cluster-fcfn"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
    # Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
###Output
_____no_output_____
###Markdown
Create the configuration and train a forecaster First generate the configuration, in which we:* Set metadata columns: target, time column and grain column names.* Validate our data using cross validation with rolling window method.* Set normalized root mean squared error as a metric to select the best model.* Set early termination to True, so the iterations through the models will stop when no improvements in accuracy score will be made.* Set limitations on the length of experiment run to 15 minutes.* Finally, we set the task to be forecasting.* We apply the lag lead operator to the target value i.e. we use the previous values as a predictor for the future ones, as illustrated below.
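For example, with `target_lags = [1,2,3]` (as set in the cell below), the featurization adds the three previous target values $y_{t-1}, y_{t-2}, y_{t-3}$ as extra predictors when forecasting $y_t$.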
###Code
lags = [1,2,3]
max_horizon = n_test_periods
time_series_settings = {
'time_column_name': TIME_COLUMN_NAME,
'grain_column_names': [ GRAIN_COLUMN_NAME ],
'max_horizon': max_horizon,
'target_lags': lags
}
###Output
_____no_output_____
###Markdown
Run the model selection and training process.
###Code
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_forecasting_function.log',
primary_metric='normalized_root_mean_squared_error',
experiment_timeout_minutes=15,
enable_early_stopping=True,
training_data=train_data,
compute_target=compute_target,
n_cross_validations=3,
verbosity = logging.INFO,
max_concurrent_iterations=4,
max_cores_per_iteration=-1,
label_column_name=target_label,
**time_series_settings)
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
# Retrieve the best model to use it further.
_, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Forecasting from the trained model In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data. X_train is directly followed by the X_test Let's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts that are made on a daily or slower cadence typically fall into this category. Retraining the model every time benefits the accuracy because the most recent data is often the most informative. The `X_test` and `y_query` below, taken together, form the **forecast request**. The two are interpreted as aligned - `y_query` could actually be a column in `X_test`. `NaN`s in `y_query` are the question marks. These will be filled with the forecasts. When the forecast period immediately follows the training period, the model retains the last few points of data. You can simply fill `y_query` with question marks - the model already has the data for the lookback. Typical path: X_test is known, forecast all upcoming periods
###Code
# The data set contains hourly data, the training set ends at 01/02/2000 at 05:00
# These are predictions we are asking the model to make (does not contain the target column y),
# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data
X_test
y_query = np.repeat(np.NaN, X_test.shape[0])
y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test, y_query)
# xy_nogap contains the predictions in the _automl_target_col column.
# Those same numbers are output in y_pred_no_gap
xy_nogap
###Output
_____no_output_____
###Markdown
Confidence intervals The forecasting model may also be used to predict forecasting intervals by running ```forecast_quantiles()```. This method accepts the same parameters as forecast().
###Code
quantiles = fitted_model.forecast_quantiles(X_test, y_query)
quantiles
###Output
_____no_output_____
###Markdown
Distribution forecastsOften the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such case, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". This is called a "service level". Here is how you get quantile forecasts.
###Code
# specify which quantiles you would like
fitted_model.quantiles = [0.01, 0.5, 0.95]
# use forecast_quantiles function, not the forecast() one
y_pred_quantiles = fitted_model.forecast_quantiles(X_test, y_query)
# it all nicely aligns column-wise
pd.concat([X_test.reset_index(), pd.DataFrame({'query' : y_query}), y_pred_quantiles], axis=1)
###Output
_____no_output_____
###Markdown
Destination-date forecast: "just do something"In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to "destination date". The destination date still needs to fit within the maximum horizon from training.
###Code
# We will take the destination date as a last date in the test set.
dest = max(X_test[TIME_COLUMN_NAME])
y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)
# This form also shows how we imputed the predictors which were not given. (Not so well! Use with caution!)
xy_dest
###Output
_____no_output_____
###Markdown
Forecasting away from training data Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model "looks back" -- uses previous values of the target -- then we somehow need to provide those values to the model.The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per grain, so each grain can have a different forecast origin. The part of data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_test` (aligned with corresponding times in `X_test`).
###Code
# generate the same kind of test data we trained on,
# but now make the train set much longer, so that the test set will be in the future
X_context, y_context, X_away, y_away = get_timeseries(train_len=42, # train data was 30 steps long
test_len=4,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
# end of the data we trained on
print(X_train.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].max())
# start of the data we want to predict on
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].min())
###Output
_____no_output_____
###Markdown
There is a gap of 12 hours between end of training and beginning of `X_away`. (It looks like 13 because all timestamps point to the start of the one hour periods.) Using only `X_away` will fail without adding context data for the model to consume.
###Code
try:
y_query = y_away.copy()
y_query.fill(np.NaN)
y_pred_away, xy_away = fitted_model.forecast(X_away, y_query)
xy_away
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
How should we read that error message? The forecast origin is at the last time the model saw an actual value of `y` (the target). That was at the end of the training data! Because the model received all `NaN` (and not an actual target value), it is attempting to forecast from the end of the training data, but the requested forecast periods are past the maximum horizon. We need to provide a definite `y` value to establish the forecast origin. We will use this helper function to take the required amount of context from the data preceding the testing data. Its definition is intentionally simplified to keep the idea clear.
###Code
def make_forecasting_query(fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback):
"""
This function will take the full dataset, and create the query
to predict all values of the grain from the `forecast_origin`
forward for the next `horizon` horizons. Context from previous
`lookback` periods will be included.
fulldata: pandas.DataFrame a time series dataset. Needs to contain X and y.
time_column_name: string which column (must be in fulldata) is the time axis
target_column_name: string which column (must be in fulldata) is to be forecast
forecast_origin: datetime type the last time we (pretend to) have target values
horizon: timedelta how far forward, in time units (not periods)
lookback: timedelta how far back does the model look?
Example:
```
forecast_origin = pd.to_datetime('2012-09-01') + pd.DateOffset(days=5) # forecast 5 days after end of training
print(forecast_origin)
X_query, y_query = make_forecasting_query(data,
forecast_origin = forecast_origin,
horizon = pd.DateOffset(days=7), # 7 days into the future
lookback = pd.DateOffset(days=1), # model has lag 1 period (day)
)
```
"""
X_past = fulldata[ (fulldata[ time_column_name ] > forecast_origin - lookback) &
(fulldata[ time_column_name ] <= forecast_origin)
]
X_future = fulldata[ (fulldata[ time_column_name ] > forecast_origin) &
(fulldata[ time_column_name ] <= forecast_origin + horizon)
]
y_past = X_past.pop(target_column_name).values.astype(np.float)
y_future = X_future.pop(target_column_name).values.astype(np.float)
# Now take y_future and turn it into question marks
y_query = y_future.copy().astype(np.float) # because sometimes life hands you an int
y_query.fill(np.NaN)
print("X_past is " + str(X_past.shape) + " - shaped")
print("X_future is " + str(X_future.shape) + " - shaped")
print("y_past is " + str(y_past.shape) + " - shaped")
print("y_query is " + str(y_query.shape) + " - shaped")
X_pred = pd.concat([X_past, X_future])
y_pred = np.concatenate([y_past, y_query])
return X_pred, y_pred
###Output
_____no_output_____
###Markdown
Let's see where the context data ends - it ends, by construction, just before the testing data starts.
###Code
print(X_context.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
X_context.tail(5)
# Since the length of the lookback is 3,
# we need to add 3 periods from the context to the request
# so that the model has the data it needs
# Put the X and y back together for a while.
# They like each other and it makes them happy.
X_context[TARGET_COLUMN_NAME] = y_context
X_away[TARGET_COLUMN_NAME] = y_away
fulldata = pd.concat([X_context, X_away])
# forecast origin is the last point of data, which is one 1-hr period before test
forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)
# it is indeed the last point of the context
assert forecast_origin == X_context[TIME_COLUMN_NAME].max()
print("Forecast origin: " + str(forecast_origin))
# the model uses lags and rolling windows to look back in time
n_lookback_periods = max(lags)
lookback = pd.DateOffset(hours=n_lookback_periods)
horizon = pd.DateOffset(hours=max_horizon)
# now make the forecast query from context (refer to figure)
X_pred, y_pred = make_forecasting_query(fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME,
forecast_origin, horizon, lookback)
# show the forecast request aligned
X_show = X_pred.copy()
X_show[TARGET_COLUMN_NAME] = y_pred
X_show
###Output
_____no_output_____
###Markdown
Note that the forecast origin is at 17:00 for both grains, and periods from 18:00 are to be forecast.
###Code
# Now everything works
y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)
# show the forecast aligned
X_show = xy_away.reset_index()
# without the generated features
X_show[['date', 'grain', 'ext_predictor', '_automl_target_col']]
# prediction is in _automl_target_col
###Output
_____no_output_____
###Markdown
Automated Machine Learning Forecasting away from training data Contents1. [Introduction](Introduction)2. [Setup](Setup)3. [Data](Data)4. [Prepare remote compute and data.](prepare_remote)4. [Create the configuration and train a forecaster](train)5. [Forecasting from the trained model](forecasting)6. [Forecasting away from training data](forecasting_away) IntroductionThis notebook demonstrates the full interface to the `forecast()` function. The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high frequency forecasting** when forecasts need to be made more frequently than the model can be retrained. Examples are in Internet of Things and predictive cloud resource scaling.Here we show how to use the `forecast()` function when a time gap exists between training data and prediction period.Terminology:* forecast origin: the last period when the target value is known* forecast periods(s): the period(s) for which the value of the target is desired.* forecast horizon: the number of forecast periods* lookback: how many past periods (before forecast origin) the model function depends on. The larger of number of lags and length of rolling window.* prediction context: `lookback` periods immediately preceding the forecast origin Setup Please make sure you have followed the `configuration.ipynb` notebook so that your ML workspace information is saved in the config file.
###Code
import os
import pandas as pd
import numpy as np
import logging
import warnings
from azureml.core.dataset import Dataset
from pandas.tseries.frequencies import to_offset
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
np.set_printoptions(precision=4, suppress=True, linewidth=120)
import azureml.core
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
# set up workspace
import sys
sys.path.append(r'C:\Users\jp\Documents\GitHub\vault-private')
import credentials
ws = credentials.authenticate_AZR('gmail', 'testground') # auth & ws setup in one swing
# choose a name for the run history container in the workspace
experiment_name = 'automl-forecast-function-demo'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', 50)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
Found workspace testground at location eastus2
###Markdown
DataFor demonstration purposes we will generate the data artificially and use it for forecasting.
###Code
TIME_COLUMN_NAME = 'date'
GRAIN_COLUMN_NAME = 'grain'
TARGET_COLUMN_NAME = 'y'
def get_timeseries(train_len: int,
test_len: int,
time_column_name: str,
target_column_name: str,
grain_column_name: str,
grains: int = 1,
freq: str = 'H'):
"""
Return the time series of designed length.
:param train_len: The length of training data (one series).
:type train_len: int
:param test_len: The length of testing data (one series).
:type test_len: int
:param time_column_name: The desired name of a time column.
:type time_column_name: str
:param target_column_name: The desired name of a target column.
:type target_column_name: str
:param grain_column_name: The desired name of a grain column.
:type grain_column_name: str
:param grains: The number of grains.
:type grains: int
:param freq: The frequency string representing pandas offset.
see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
:type freq: str
:returns: the tuple of train and test data sets.
:rtype: tuple
"""
data_train = [] # type: List[pd.DataFrame]
data_test = [] # type: List[pd.DataFrame]
data_length = train_len + test_len
for i in range(grains):
X = pd.DataFrame({
time_column_name: pd.date_range(start='2000-01-01',
periods=data_length,
freq=freq),
target_column_name: np.arange(data_length).astype(float) + np.random.rand(data_length) + i*5,
'ext_predictor': np.asarray(range(42, 42 + data_length)),
grain_column_name: np.repeat('g{}'.format(i), data_length)
})
data_train.append(X[:train_len])
data_test.append(X[train_len:])
X_train = pd.concat(data_train)
y_train = X_train.pop(target_column_name).values
X_test = pd.concat(data_test)
y_test = X_test.pop(target_column_name).values
return X_train, y_train, X_test, y_test
n_test_periods = 6
n_train_periods = 30
X_train, y_train, X_test, y_test = get_timeseries(train_len=n_train_periods,
test_len=n_test_periods,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
###Output
_____no_output_____
###Markdown
Let's see what the training data looks like.
###Code
X_train.tail()
# plot the example time series
import matplotlib.pyplot as plt
whole_data = X_train.copy()
target_label = 'y'
whole_data[target_label] = y_train
for g in whole_data.groupby('grain'):
plt.plot(g[1]['date'].values, g[1]['y'].values, label=g[0])
plt.legend()
plt.show()
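# quick structural check (a sketch): each grain should have n_train_periods rows
print(whole_data.groupby(GRAIN_COLUMN_NAME).size())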
###Output
_____no_output_____
###Markdown
Prepare remote compute and data. The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create a [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
# We need to save the artificial data and then upload it to the default workspace datastore.
DATA_PATH = "fc_fn_data"
DATA_PATH_X = "{}/data_train.csv".format(DATA_PATH)
if not os.path.isdir('data'):
os.mkdir('data')
pd.DataFrame(whole_data).to_csv("data/data_train.csv", index=False)
# Upload saved data to the default data store.
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path=DATA_PATH, overwrite=True, show_progress=True)
train_data = Dataset.Tabular.from_delimited_files(path=ds.path(DATA_PATH_X))
train_data.to_pandas_dataframe().info()
forecast_origin = max(train_data.to_pandas_dataframe()['date'])
print(forecast_origin)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 60 entries, 0 to 59
Data columns (total 4 columns):
date 60 non-null datetime64[ns]
ext_predictor 60 non-null int64
grain 60 non-null object
y 60 non-null float64
dtypes: datetime64[ns](1), float64(1), int64(1), object(1)
memory usage: 2.0+ KB
2000-01-02 05:00:00
###Markdown
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
###Code
import pyautogui
cts = ws.compute_targets
answer = pyautogui.prompt(
text='Enter compute target (gpu, cpu, or local)',
title='Compute target',
default='cpu')
compute_dict = {'gpu':'gpu-cluster', 'cpu':'cpu-cluster', 'local':'gpu-local'}
compute_target_name = compute_dict[answer]
compute_target = cts[compute_target_name]
print(compute_target.name)
###Output
cpu-cluster
###Markdown
Create the configuration and train a forecaster First generate the configuration, in which we:* Set metadata columns: the target, time column and grain column names.* Validate our data using cross validation with the rolling window method.* Set normalized root mean squared error as the metric to select the best model.* Set early termination to True, so that iteration through the models stops when no further improvement in the accuracy score is made.* Limit the length of the experiment run to 15 minutes.* Finally, set the task to be forecasting.* Apply the lag lead operator to the target value, i.e. use the previous values of the target as a predictor for the future ones.
###Code
lags = [1,2,3]
max_horizon = n_test_periods
time_series_settings = {
'time_column_name': TIME_COLUMN_NAME,
'grain_column_names': [ GRAIN_COLUMN_NAME ],
'max_horizon': max_horizon,
'target_lags': lags
}
###Output
_____no_output_____
###Markdown
Run the model selection and training process.
###Code
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from azureml.widgets import RunDetails
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_forecasting_function.log',
primary_metric='normalized_root_mean_squared_error',
experiment_timeout_hours=0.25,
enable_early_stopping=True,
training_data=train_data,
compute_target=compute_target,
n_cross_validations=3,
verbosity = logging.INFO,
max_concurrent_iterations=4,
max_cores_per_iteration=-1,
label_column_name=target_label,
**time_series_settings)
remote_run = experiment.submit(automl_config, show_output=True)
remote_run.wait_for_completion()
RunDetails(remote_run).show()
# Retrieve the best model to use it further.
best_run, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Forecasting from the trained model In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data. X_train is directly followed by the X_testLet's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts that are made on a daily or slower cadence typically fall into this category. Retraining the model every time benefits accuracy because the most recent data is often the most informative.We use `X_test` as a **forecast request** to generate the predictions. Typical path: X_test is known, forecast all upcoming periods
###Code
# The data set contains hourly data, the training set ends at 01/02/2000 at 05:00
# These are predictions we are asking the model to make (it does not contain the target column y),
# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data
X_test
y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test)
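# A rough side-by-side of actuals vs. predictions (a sketch; assumes y_pred_no_gap
# comes back in the same row order as X_test):
print(pd.concat([X_test.reset_index(drop=True),
                 pd.DataFrame({'actual': y_test, 'predicted': y_pred_no_gap})], axis=1))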
# xy_nogap contains the predictions in the _automl_target_col column.
# Those same numbers are output in y_pred_no_gap
xy_nogap
###Output
_____no_output_____
###Markdown
Confidence intervals The forecasting model may be used to produce prediction intervals by running ```forecast_quantiles()```. This method accepts the same parameters as forecast().
###Code
quantiles = fitted_model.forecast_quantiles(X_test)
quantiles
###Output
_____no_output_____
###Markdown
Distribution forecastsOften the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such a case, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". This is called a "service level". Here is how you get quantile forecasts.
###Code
# specify which quantiles you would like
fitted_model.quantiles = [0.01, 0.5, 0.95]
# use forecast_quantiles function, not the forecast() one
y_pred_quantiles = fitted_model.forecast_quantiles(X_test)
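# For the "service level" use case described above you would read off the upper quantile
# column. Column labels can differ between SDK versions, so inspect them first (a sketch):
print(y_pred_quantiles.columns.tolist())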
# it all nicely aligns column-wise
pd.concat([X_test.reset_index(), y_pred_quantiles], axis=1)
###Output
_____no_output_____
###Markdown
Destination-date forecast: "just do something"In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to "destination date". The destination date still needs to fit within the maximum horizon from training.
###Code
# We will take the destination date as a last date in the test set.
dest = max(X_test[TIME_COLUMN_NAME])
y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)
# This form also shows how we imputed the predictors which were not given. (Not so well! Use with caution!)
print(xy_dest)
print(y_pred_dest)
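# A more compact view of the same result (a sketch; as elsewhere in this notebook,
# the prediction is in the _automl_target_col column):
xy_dest.reset_index()[['date', 'grain', '_automl_target_col']]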
###Output
ext_predictor \
date grain origin
2000-01-02 06:00:00 g0 2000-01-02 05:00:00 56
g1 2000-01-02 05:00:00 56
2000-01-02 07:00:00 g0 2000-01-02 05:00:00 56
g1 2000-01-02 05:00:00 56
2000-01-02 08:00:00 g0 2000-01-02 05:00:00 56
g1 2000-01-02 05:00:00 56
2000-01-02 09:00:00 g0 2000-01-02 05:00:00 56
g1 2000-01-02 05:00:00 56
2000-01-02 10:00:00 g0 2000-01-02 05:00:00 56
g1 2000-01-02 05:00:00 56
2000-01-02 11:00:00 g0 2000-01-02 05:00:00 56
g1 2000-01-02 05:00:00 56
ext_predictor_WASNULL \
date grain origin
2000-01-02 06:00:00 g0 2000-01-02 05:00:00 1
g1 2000-01-02 05:00:00 1
2000-01-02 07:00:00 g0 2000-01-02 05:00:00 1
g1 2000-01-02 05:00:00 1
2000-01-02 08:00:00 g0 2000-01-02 05:00:00 1
g1 2000-01-02 05:00:00 1
2000-01-02 09:00:00 g0 2000-01-02 05:00:00 1
g1 2000-01-02 05:00:00 1
2000-01-02 10:00:00 g0 2000-01-02 05:00:00 1
g1 2000-01-02 05:00:00 1
2000-01-02 11:00:00 g0 2000-01-02 05:00:00 1
g1 2000-01-02 05:00:00 1
horizon_origin \
date grain origin
2000-01-02 06:00:00 g0 2000-01-02 05:00:00 1
g1 2000-01-02 05:00:00 1
2000-01-02 07:00:00 g0 2000-01-02 05:00:00 2
g1 2000-01-02 05:00:00 2
2000-01-02 08:00:00 g0 2000-01-02 05:00:00 3
g1 2000-01-02 05:00:00 3
2000-01-02 09:00:00 g0 2000-01-02 05:00:00 4
g1 2000-01-02 05:00:00 4
2000-01-02 10:00:00 g0 2000-01-02 05:00:00 5
g1 2000-01-02 05:00:00 5
2000-01-02 11:00:00 g0 2000-01-02 05:00:00 6
g1 2000-01-02 05:00:00 6
_automl_target_col_lag1H \
date grain origin
2000-01-02 06:00:00 g0 2000-01-02 05:00:00 29.57
g1 2000-01-02 05:00:00 34.82
2000-01-02 07:00:00 g0 2000-01-02 05:00:00 29.57
g1 2000-01-02 05:00:00 34.82
2000-01-02 08:00:00 g0 2000-01-02 05:00:00 29.57
g1 2000-01-02 05:00:00 34.82
2000-01-02 09:00:00 g0 2000-01-02 05:00:00 29.57
g1 2000-01-02 05:00:00 34.82
2000-01-02 10:00:00 g0 2000-01-02 05:00:00 29.57
g1 2000-01-02 05:00:00 34.82
2000-01-02 11:00:00 g0 2000-01-02 05:00:00 29.57
g1 2000-01-02 05:00:00 34.82
_automl_target_col_lag2H \
date grain origin
2000-01-02 06:00:00 g0 2000-01-02 05:00:00 28.67
g1 2000-01-02 05:00:00 33.83
2000-01-02 07:00:00 g0 2000-01-02 05:00:00 28.67
g1 2000-01-02 05:00:00 33.83
2000-01-02 08:00:00 g0 2000-01-02 05:00:00 28.67
g1 2000-01-02 05:00:00 33.83
2000-01-02 09:00:00 g0 2000-01-02 05:00:00 28.67
g1 2000-01-02 05:00:00 33.83
2000-01-02 10:00:00 g0 2000-01-02 05:00:00 28.67
g1 2000-01-02 05:00:00 33.83
2000-01-02 11:00:00 g0 2000-01-02 05:00:00 28.67
g1 2000-01-02 05:00:00 33.83
_automl_target_col_lag3H \
date grain origin
2000-01-02 06:00:00 g0 2000-01-02 05:00:00 27.88
g1 2000-01-02 05:00:00 32.72
2000-01-02 07:00:00 g0 2000-01-02 05:00:00 27.88
g1 2000-01-02 05:00:00 32.72
2000-01-02 08:00:00 g0 2000-01-02 05:00:00 27.88
g1 2000-01-02 05:00:00 32.72
2000-01-02 09:00:00 g0 2000-01-02 05:00:00 27.88
g1 2000-01-02 05:00:00 32.72
2000-01-02 10:00:00 g0 2000-01-02 05:00:00 27.88
g1 2000-01-02 05:00:00 32.72
2000-01-02 11:00:00 g0 2000-01-02 05:00:00 27.88
g1 2000-01-02 05:00:00 32.72
grain_grain day hour am_pm \
date grain origin
2000-01-02 06:00:00 g0 2000-01-02 05:00:00 0 2 6 0
g1 2000-01-02 05:00:00 1 2 6 0
2000-01-02 07:00:00 g0 2000-01-02 05:00:00 0 2 7 0
g1 2000-01-02 05:00:00 1 2 7 0
2000-01-02 08:00:00 g0 2000-01-02 05:00:00 0 2 8 0
g1 2000-01-02 05:00:00 1 2 8 0
2000-01-02 09:00:00 g0 2000-01-02 05:00:00 0 2 9 0
g1 2000-01-02 05:00:00 1 2 9 0
2000-01-02 10:00:00 g0 2000-01-02 05:00:00 0 2 10 0
g1 2000-01-02 05:00:00 1 2 10 0
2000-01-02 11:00:00 g0 2000-01-02 05:00:00 0 2 11 0
g1 2000-01-02 05:00:00 1 2 11 0
hour12 _automl_target_col
date grain origin
2000-01-02 06:00:00 g0 2000-01-02 05:00:00 6 30.82
g1 2000-01-02 05:00:00 6 35.98
2000-01-02 07:00:00 g0 2000-01-02 05:00:00 7 31.79
g1 2000-01-02 05:00:00 7 36.95
2000-01-02 08:00:00 g0 2000-01-02 05:00:00 8 32.77
g1 2000-01-02 05:00:00 8 37.93
2000-01-02 09:00:00 g0 2000-01-02 05:00:00 9 33.74
g1 2000-01-02 05:00:00 9 38.90
2000-01-02 10:00:00 g0 2000-01-02 05:00:00 10 34.72
g1 2000-01-02 05:00:00 10 39.88
2000-01-02 11:00:00 g0 2000-01-02 05:00:00 11 35.70
g1 2000-01-02 05:00:00 11 40.85
[30.8177 35.9753 31.7934 36.951 32.7691 37.9267 33.7449 38.9024 34.7206 39.8782 35.6963 40.8539]
###Markdown
Forecasting away from training data Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model "looks back" -- uses previous values of the target -- then we somehow need to provide those values to the model.The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per grain, so each grain can have a different forecast origin. The part of data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_test` (aligned with corresponding times in `X_test`).
###Code
# generate the same kind of test data we trained on,
# but now make the train set much longer, so that the test set will be in the future
X_context, y_context, X_away, y_away = get_timeseries(train_len=42, # train data was 30 steps long
test_len=4,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
# end of the data we trained on
print(X_train.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].max())
# start of the data we want to predict on
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].min())
###Output
grain
g0 2000-01-02 05:00:00
g1 2000-01-02 05:00:00
Name: date, dtype: datetime64[ns]
grain
g0 2000-01-02 18:00:00
g1 2000-01-02 18:00:00
Name: date, dtype: datetime64[ns]
###Markdown
There is a gap of 12 hours between the end of training and the beginning of `X_away`. (It looks like 13 because all timestamps point to the start of the one-hour periods.) Using only `X_away` will fail without adding context data for the model to consume.
###Code
try:
y_pred_away, xy_away = fitted_model.forecast(X_away)
xy_away
except Exception as e:
print(e)
###Output
WrongShapeDataError:
Message: Input prediction data X_pred or input forecast_destination contains dates later than maximum forecast horizon. Please shorten the prediction data so that it is within the maximum horizon or adjust the forecast_destination date.
InnerException None
ErrorResponse
{
"error": {
"code": "UserError",
"inner_error": {
"code": "InvalidData",
"inner_error": {
"code": "DataShape"
}
},
"message": "Input prediction data X_pred or input forecast_destination contains dates later than maximum forecast horizon. Please shorten the prediction data so that it is within the maximum horizon or adjust the forecast_destination date."
}
}
###Markdown
How should we read that error message? The forecast origin is at the last time the model saw an actual value of `y` (the target). That was at the end of the training data! The model is attempting to forecast from the end of the training data, but the requested forecast periods are past the maximum horizon. We need to provide a definite `y` value to establish the forecast origin.We will use this helper function to take the required amount of context from the data preceding the testing data. Its definition is intentionally simplified to keep the idea clear.
###Code
def make_forecasting_query(fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback):
"""
This function will take the full dataset, and create the query
to predict all values of the grain from the `forecast_origin`
forward for the next `horizon` horizons. Context from previous
`lookback` periods will be included.
fulldata: pandas.DataFrame a time series dataset. Needs to contain X and y.
time_column_name: string which column (must be in fulldata) is the time axis
target_column_name: string which column (must be in fulldata) is to be forecast
forecast_origin: datetime type the last time we (pretend to) have target values
horizon: timedelta how far forward, in time units (not periods)
lookback: timedelta how far back does the model look?
Example:
```
forecast_origin = pd.to_datetime('2012-09-01') + pd.DateOffset(days=5) # forecast 5 days after end of training
print(forecast_origin)
X_query, y_query = make_forecasting_query(data,
forecast_origin = forecast_origin,
horizon = pd.DateOffset(days=7), # 7 days into the future
lookback = pd.DateOffset(days=1), # model has lag 1 period (day)
)
```
"""
X_past = fulldata[ (fulldata[ time_column_name ] > forecast_origin - lookback) &
(fulldata[ time_column_name ] <= forecast_origin)
]
X_future = fulldata[ (fulldata[ time_column_name ] > forecast_origin) &
(fulldata[ time_column_name ] <= forecast_origin + horizon)
]
y_past = X_past.pop(target_column_name).values.astype(np.float)
y_future = X_future.pop(target_column_name).values.astype(np.float)
# Now take y_future and turn it into question marks
y_query = y_future.copy().astype(np.float) # because sometimes life hands you an int
y_query.fill(np.NaN)
print("X_past is " + str(X_past.shape) + " - shaped")
print("X_future is " + str(X_future.shape) + " - shaped")
print("y_past is " + str(y_past.shape) + " - shaped")
print("y_query is " + str(y_query.shape) + " - shaped")
X_pred = pd.concat([X_past, X_future])
y_pred = np.concatenate([y_past, y_query])
return X_pred, y_pred
###Output
_____no_output_____
###Markdown
Let's see where the context data ends - it ends, by construction, just before the testing data starts.
###Code
print(X_context.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
X_context.tail(5)
# Since the length of the lookback is 3,
# we need to add 3 periods from the context to the request
# so that the model has the data it needs
# Put the X and y back together for a while.
# They like each other and it makes them happy.
X_context[TARGET_COLUMN_NAME] = y_context
X_away[TARGET_COLUMN_NAME] = y_away
fulldata = pd.concat([X_context, X_away])
# forecast origin is the last point of data, which is one 1-hr period before test
forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)
# it is indeed the last point of the context
assert forecast_origin == X_context[TIME_COLUMN_NAME].max()
print("Forecast origin: " + str(forecast_origin))
# the model uses lags and rolling windows to look back in time
n_lookback_periods = max(lags)
lookback = pd.DateOffset(hours=n_lookback_periods)
horizon = pd.DateOffset(hours=max_horizon)
# now make the forecast query from context (refer to figure)
X_pred, y_pred = make_forecasting_query(fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME,
forecast_origin, horizon, lookback)
# show the forecast request aligned
X_show = X_pred.copy()
X_show[TARGET_COLUMN_NAME] = y_pred
X_show
###Output
Forecast origin: 2000-01-02 17:00:00
X_past is (6, 3) - shaped
X_future is (8, 3) - shaped
y_past is (6,) - shaped
y_query is (8,) - shaped
###Markdown
Note that the forecast origin is at 17:00 for both grains, and periods from 18:00 are to be forecast.
###Code
# Now everything works
y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)
# show the forecast aligned
X_show = xy_away.reset_index()
# without the generated features
X_show[['date', 'grain', 'ext_predictor', '_automl_target_col']]
# prediction is in _automl_target_col
X_pred
###Output
_____no_output_____
###Markdown
Automated Machine Learning Forecasting away from training data Contents1. [Introduction](Introduction)2. [Setup](Setup)3. [Data](Data)4. [Prepare remote compute and data.](prepare_remote)5. [Create the configuration and train a forecaster](train)6. [Forecasting from the trained model](forecasting)7. [Forecasting away from training data](forecasting_away) IntroductionThis notebook demonstrates the full interface to the `forecast()` function. The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follow the training data. However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high-frequency forecasting**, when forecasts need to be made more frequently than the model can be retrained. Examples include Internet of Things applications and predictive cloud resource scaling.Here we show how to use the `forecast()` function when a time gap exists between the training data and the prediction period.Terminology:* forecast origin: the last period when the target value is known* forecast period(s): the period(s) for which the value of the target is desired.* forecast horizon: the number of forecast periods* lookback: how many past periods (before the forecast origin) the model function depends on; the larger of the number of lags and the length of the rolling window.* prediction context: `lookback` periods immediately preceding the forecast origin Setup Please make sure you have followed the `configuration.ipynb` notebook so that your ML workspace information is saved in the config file.
###Code
import os
import pandas as pd
import numpy as np
import logging
import warnings
from azureml.core.dataset import Dataset
from pandas.tseries.frequencies import to_offset
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
np.set_printoptions(precision=4, suppress=True, linewidth=120)
import azureml.core
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-forecast-function-demo'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataFor demonstration purposes we will generate the data artificially and use it for forecasting.
###Code
TIME_COLUMN_NAME = 'date'
GRAIN_COLUMN_NAME = 'grain'
TARGET_COLUMN_NAME = 'y'
def get_timeseries(train_len: int,
test_len: int,
time_column_name: str,
target_column_name: str,
grain_column_name: str,
grains: int = 1,
freq: str = 'H'):
"""
Return the time series of designed length.
:param train_len: The length of training data (one series).
:type train_len: int
:param test_len: The length of testing data (one series).
:type test_len: int
:param time_column_name: The desired name of a time column.
:type time_column_name: str
:param target_column_name: The desired name of a target column.
:type target_column_name: str
:param grain_column_name: The desired name of a grain column.
:type grain_column_name: str
:param grains: The number of grains.
:type grains: int
:param freq: The frequency string representing pandas offset.
see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
:type freq: str
:returns: the tuple of train and test data sets.
:rtype: tuple
"""
data_train = [] # type: List[pd.DataFrame]
data_test = [] # type: List[pd.DataFrame]
data_length = train_len + test_len
for i in range(grains):
X = pd.DataFrame({
time_column_name: pd.date_range(start='2000-01-01',
periods=data_length,
freq=freq),
target_column_name: np.arange(data_length).astype(float) + np.random.rand(data_length) + i*5,
'ext_predictor': np.asarray(range(42, 42 + data_length)),
grain_column_name: np.repeat('g{}'.format(i), data_length)
})
data_train.append(X[:train_len])
data_test.append(X[train_len:])
X_train = pd.concat(data_train)
y_train = X_train.pop(target_column_name).values
X_test = pd.concat(data_test)
y_test = X_test.pop(target_column_name).values
return X_train, y_train, X_test, y_test
n_test_periods = 6
n_train_periods = 30
X_train, y_train, X_test, y_test = get_timeseries(train_len=n_train_periods,
test_len=n_test_periods,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
###Output
_____no_output_____
###Markdown
Let's see what the training data looks like.
###Code
X_train.tail()
# plot the example time series
import matplotlib.pyplot as plt
whole_data = X_train.copy()
target_label = 'y'
whole_data[target_label] = y_train
for g in whole_data.groupby('grain'):
plt.plot(g[1]['date'].values, g[1]['y'].values, label=g[0])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Prepare remote compute and data. The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create a [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
# We need to save the artificial data and then upload it to the default workspace datastore.
DATA_PATH = "fc_fn_data"
DATA_PATH_X = "{}/data_train.csv".format(DATA_PATH)
if not os.path.isdir('data'):
os.mkdir('data')
pd.DataFrame(whole_data).to_csv("data/data_train.csv", index=False)
# Upload saved data to the default data store.
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path=DATA_PATH, overwrite=True, show_progress=True)
train_data = Dataset.Tabular.from_delimited_files(path=ds.path(DATA_PATH_X))
###Output
_____no_output_____
###Markdown
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
###Code
amlcompute_cluster_name = "cpu-cluster-fcfn"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
    # Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
###Output
_____no_output_____
###Markdown
Create the configuration and train a forecaster First generate the configuration, in which we:* Set metadata columns: the target, time column and grain column names.* Validate our data using cross validation with the rolling window method.* Set normalized root mean squared error as the metric to select the best model.* Set early termination to True, so that iteration through the models stops when no further improvement in the accuracy score is made.* Limit the length of the experiment run to 15 minutes.* Finally, set the task to be forecasting.* Apply the lag lead operator to the target value, i.e. use the previous values of the target as a predictor for the future ones.
###Code
lags = [1,2,3]
max_horizon = n_test_periods
time_series_settings = {
'time_column_name': TIME_COLUMN_NAME,
'grain_column_names': [ GRAIN_COLUMN_NAME ],
'max_horizon': max_horizon,
'target_lags': lags
}
###Output
_____no_output_____
###Markdown
Run the model selection and training process.
###Code
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_forecasting_function.log',
primary_metric='normalized_root_mean_squared_error',
experiment_timeout_minutes=15,
enable_early_stopping=True,
training_data=train_data,
compute_target=compute_target,
n_cross_validations=3,
verbosity = logging.INFO,
max_concurrent_iterations=4,
max_cores_per_iteration=-1,
label_column_name=target_label,
**time_series_settings)
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
# Retrieve the best model to use it further.
_, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Forecasting from the trained model In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data. X_train is directly followed by the X_testLet's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts that are made on a daily or slower cadence typically fall into this category. Retraining the model every time benefits accuracy because the most recent data is often the most informative.The `X_test` and `y_query` below, taken together, form the **forecast request**. The two are interpreted as aligned - `y_query` could actually be a column in `X_test`. `NaN`s in `y_query` are the question marks; these will be filled in with the forecasts.When the forecast period immediately follows the training period, the model retains the last few points of data, so you can simply fill `y_query` with question marks - the model already has the data it needs for the lookback. Typical path: X_test is known, forecast all upcoming periods
###Code
# The data set contains hourly data, the training set ends at 01/02/2000 at 05:00
# These are predictions we are asking the model to make (it does not contain the target column y),
# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data
X_test
y_query = np.repeat(np.NaN, X_test.shape[0])
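# The request can be inspected by putting the two together (a sketch): rows whose
# y is NaN are the "question marks" the model is asked to fill in.
X_request = X_test.copy()
X_request[TARGET_COLUMN_NAME] = y_query
print(X_request)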
y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test, y_query)
# xy_nogap contains the predictions in the _automl_target_col column.
# Those same numbers are output in y_pred_no_gap
xy_nogap
###Output
_____no_output_____
###Markdown
Confidence intervals The forecasting model may be used to produce prediction intervals by running ```forecast_quantiles()```. This method accepts the same parameters as forecast().
###Code
quantiles = fitted_model.forecast_quantiles(X_test, y_query)
quantiles
###Output
_____no_output_____
###Markdown
Distribution forecastsOften the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such a case, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". This is called a "service level". Here is how you get quantile forecasts.
###Code
# specify which quantiles you would like
fitted_model.quantiles = [0.01, 0.5, 0.95]
# use forecast_quantiles function, not the forecast() one
y_pred_quantiles = fitted_model.forecast_quantiles(X_test, y_query)
# it all nicely aligns column-wise
pd.concat([X_test.reset_index(), pd.DataFrame({'query' : y_query}), y_pred_quantiles], axis=1)
###Output
_____no_output_____
###Markdown
Destination-date forecast: "just do something"In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to "destination date". The destination date still needs to fit within the maximum horizon from training.
###Code
# We will take the destination date as a last date in the test set.
dest = max(X_test[TIME_COLUMN_NAME])
y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)
# This form also shows how we imputed the predictors which were not given. (Not so well! Use with caution!)
xy_dest
###Output
_____no_output_____
###Markdown
Forecasting away from training data Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model "looks back" -- uses previous values of the target -- then we somehow need to provide those values to the model.The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per grain, so each grain can have a different forecast origin. The part of data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_test` (aligned with corresponding times in `X_test`).
###Code
# generate the same kind of test data we trained on,
# but now make the train set much longer, so that the test set will be in the future
X_context, y_context, X_away, y_away = get_timeseries(train_len=42, # train data was 30 steps long
test_len=4,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
# end of the data we trained on
print(X_train.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].max())
# start of the data we want to predict on
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].min())
###Output
_____no_output_____
###Markdown
There is a gap of 12 hours between the end of training and the beginning of `X_away`. (It looks like 13 because all timestamps point to the start of the one-hour periods.) Using only `X_away` will fail without adding context data for the model to consume.
###Code
try:
y_query = y_away.copy()
y_query.fill(np.NaN)
y_pred_away, xy_away = fitted_model.forecast(X_away, y_query)
xy_away
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
How should we read that error message? The forecast origin is at the last time the model saw an actual value of `y` (the target). That was at the end of the training data! Because the model received all `NaN` values (and not an actual target value), it is attempting to forecast from the end of the training data, but the requested forecast periods are past the maximum horizon. We need to provide a definite `y` value to establish the forecast origin.We will use this helper function to take the required amount of context from the data preceding the testing data. Its definition is intentionally simplified to keep the idea clear.
###Code
def make_forecasting_query(fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback):
"""
This function will take the full dataset, and create the query
to predict all values of the grain from the `forecast_origin`
forward for the next `horizon` horizons. Context from previous
`lookback` periods will be included.
fulldata: pandas.DataFrame a time series dataset. Needs to contain X and y.
time_column_name: string which column (must be in fulldata) is the time axis
target_column_name: string which column (must be in fulldata) is to be forecast
forecast_origin: datetime type the last time we (pretend to) have target values
horizon: timedelta how far forward, in time units (not periods)
lookback: timedelta how far back does the model look?
Example:
```
forecast_origin = pd.to_datetime('2012-09-01') + pd.DateOffset(days=5) # forecast 5 days after end of training
print(forecast_origin)
X_query, y_query = make_forecasting_query(data,
forecast_origin = forecast_origin,
horizon = pd.DateOffset(days=7), # 7 days into the future
lookback = pd.DateOffset(days=1), # model has lag 1 period (day)
)
```
"""
X_past = fulldata[ (fulldata[ time_column_name ] > forecast_origin - lookback) &
(fulldata[ time_column_name ] <= forecast_origin)
]
X_future = fulldata[ (fulldata[ time_column_name ] > forecast_origin) &
(fulldata[ time_column_name ] <= forecast_origin + horizon)
]
y_past = X_past.pop(target_column_name).values.astype(np.float)
y_future = X_future.pop(target_column_name).values.astype(np.float)
# Now take y_future and turn it into question marks
y_query = y_future.copy().astype(np.float) # because sometimes life hands you an int
y_query.fill(np.NaN)
print("X_past is " + str(X_past.shape) + " - shaped")
print("X_future is " + str(X_future.shape) + " - shaped")
print("y_past is " + str(y_past.shape) + " - shaped")
print("y_query is " + str(y_query.shape) + " - shaped")
X_pred = pd.concat([X_past, X_future])
y_pred = np.concatenate([y_past, y_query])
return X_pred, y_pred
###Output
_____no_output_____
###Markdown
Let's see where the context data ends - it ends, by construction, just before the testing data starts.
###Code
print(X_context.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
X_context.tail(5)
# Since the length of the lookback is 3,
# we need to add 3 periods from the context to the request
# so that the model has the data it needs
# Put the X and y back together for a while.
# They like each other and it makes them happy.
X_context[TARGET_COLUMN_NAME] = y_context
X_away[TARGET_COLUMN_NAME] = y_away
fulldata = pd.concat([X_context, X_away])
# forecast origin is the last point of data, which is one 1-hr period before test
forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)
# it is indeed the last point of the context
assert forecast_origin == X_context[TIME_COLUMN_NAME].max()
print("Forecast origin: " + str(forecast_origin))
# the model uses lags and rolling windows to look back in time
n_lookback_periods = max(lags)
lookback = pd.DateOffset(hours=n_lookback_periods)
horizon = pd.DateOffset(hours=max_horizon)
# now make the forecast query from context (refer to figure)
X_pred, y_pred = make_forecasting_query(fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME,
forecast_origin, horizon, lookback)
# show the forecast request aligned
X_show = X_pred.copy()
X_show[TARGET_COLUMN_NAME] = y_pred
X_show
###Output
_____no_output_____
###Markdown
Note that the forecast origin is at 17:00 for both grains, and periods from 18:00 are to be forecast.
###Code
# Now everything works
y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)
# show the forecast aligned
X_show = xy_away.reset_index()
# without the generated features
X_show[['date', 'grain', 'ext_predictor', '_automl_target_col']]
# prediction is in _automl_target_col
###Output
_____no_output_____
###Markdown
Automated Machine Learning Forecasting away from training data Contents1. [Introduction](Introduction)2. [Setup](Setup)3. [Data](Data)4. [Prepare remote compute and data.](prepare_remote)5. [Create the configuration and train a forecaster](train)6. [Forecasting from the trained model](forecasting)7. [Forecasting away from training data](forecasting_away) IntroductionThis notebook demonstrates the full interface to the `forecast()` function. The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follow the training data. However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high-frequency forecasting**, when forecasts need to be made more frequently than the model can be retrained. Examples include Internet of Things applications and predictive cloud resource scaling.Here we show how to use the `forecast()` function when a time gap exists between the training data and the prediction period.Terminology:* forecast origin: the last period when the target value is known* forecast period(s): the period(s) for which the value of the target is desired.* forecast horizon: the number of forecast periods* lookback: how many past periods (before the forecast origin) the model function depends on; the larger of the number of lags and the length of the rolling window.* prediction context: `lookback` periods immediately preceding the forecast origin Setup Please make sure you have followed the `configuration.ipynb` notebook so that your ML workspace information is saved in the config file.
###Code
import os
import pandas as pd
import numpy as np
import logging
import warnings
from azureml.core.dataset import Dataset
from pandas.tseries.frequencies import to_offset
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
np.set_printoptions(precision=4, suppress=True, linewidth=120)
import azureml.core
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-forecast-function-demo'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataFor demonstration purposes we will generate the data artificially and use it for forecasting.
###Code
TIME_COLUMN_NAME = 'date'
GRAIN_COLUMN_NAME = 'grain'
TARGET_COLUMN_NAME = 'y'
def get_timeseries(train_len: int,
test_len: int,
time_column_name: str,
target_column_name: str,
grain_column_name: str,
grains: int = 1,
freq: str = 'H'):
"""
Return the time series of designed length.
:param train_len: The length of training data (one series).
:type train_len: int
:param test_len: The length of testing data (one series).
:type test_len: int
:param time_column_name: The desired name of a time column.
:type time_column_name: str
:param target_column_name: The desired name of a target column.
:type target_column_name: str
:param grain_column_name: The desired name of a grain column.
:type grain_column_name: str
:param grains: The number of grains.
:type grains: int
:param freq: The frequency string representing pandas offset.
see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
:type freq: str
:returns: the tuple of train and test data sets.
:rtype: tuple
"""
data_train = [] # type: List[pd.DataFrame]
data_test = [] # type: List[pd.DataFrame]
data_length = train_len + test_len
for i in range(grains):
X = pd.DataFrame({
time_column_name: pd.date_range(start='2000-01-01',
periods=data_length,
freq=freq),
target_column_name: np.arange(data_length).astype(float) + np.random.rand(data_length) + i*5,
'ext_predictor': np.asarray(range(42, 42 + data_length)),
grain_column_name: np.repeat('g{}'.format(i), data_length)
})
data_train.append(X[:train_len])
data_test.append(X[train_len:])
X_train = pd.concat(data_train)
y_train = X_train.pop(target_column_name).values
X_test = pd.concat(data_test)
y_test = X_test.pop(target_column_name).values
return X_train, y_train, X_test, y_test
n_test_periods = 6
n_train_periods = 30
X_train, y_train, X_test, y_test = get_timeseries(train_len=n_train_periods,
test_len=n_test_periods,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
###Output
_____no_output_____
###Markdown
Let's see what the training data looks like.
###Code
X_train.tail()
# plot the example time series
import matplotlib.pyplot as plt
whole_data = X_train.copy()
target_label = 'y'
whole_data[target_label] = y_train
for g in whole_data.groupby('grain'):
plt.plot(g[1]['date'].values, g[1]['y'].values, label=g[0])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Prepare remote compute and data. The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create a [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation.
###Code
# We need to save the artificial data and then upload it to the default workspace datastore.
DATA_PATH = "fc_fn_data"
DATA_PATH_X = "{}/data_train.csv".format(DATA_PATH)
if not os.path.isdir('data'):
os.mkdir('data')
pd.DataFrame(whole_data).to_csv("data/data_train.csv", index=False)
# Upload saved data to the default data store.
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path=DATA_PATH, overwrite=True, show_progress=True)
train_data = Dataset.Tabular.from_delimited_files(path=ds.path(DATA_PATH_X))
###Output
_____no_output_____
###Markdown
You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targetsamlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
###Code
amlcompute_cluster_name = "cpu-cluster-fcfn"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
    # Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
###Output
_____no_output_____
###Markdown
Create the configuration and train a forecaster First generate the configuration, in which we:* Set metadata columns: the target, time column and grain column names.* Validate our data using cross validation with the rolling window method.* Set normalized root mean squared error as the metric to select the best model.* Set early termination to True, so that iteration through the models stops when no further improvement in the accuracy score is made.* Limit the length of the experiment run to 15 minutes.* Finally, set the task to be forecasting.* Apply the lag lead operator to the target value, i.e. use the previous values of the target as a predictor for the future ones.
###Code
lags = [1,2,3]
max_horizon = n_test_periods
time_series_settings = {
'time_column_name': TIME_COLUMN_NAME,
'grain_column_names': [ GRAIN_COLUMN_NAME ],
'max_horizon': max_horizon,
'target_lags': lags
}
###Output
_____no_output_____
###Markdown
Run the model selection and training process.
###Code
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_forecasting_function.log',
primary_metric='normalized_root_mean_squared_error',
experiment_timeout_hours=0.25,
enable_early_stopping=True,
training_data=train_data,
compute_target=compute_target,
n_cross_validations=3,
verbosity = logging.INFO,
max_concurrent_iterations=4,
max_cores_per_iteration=-1,
label_column_name=target_label,
**time_series_settings)
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
# Retrieve the best model to use it further.
_, fitted_model = remote_run.get_output()
###Output
_____no_output_____
###Markdown
Forecasting from the trained model In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data. X_train is directly followed by the X_testLet's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts that are made on a daily or slower cadence typically fall into this category. Retraining the model every time benefits accuracy because the most recent data is often the most informative.We use `X_test` as a **forecast request** to generate the predictions. Typical path: X_test is known, forecast all upcoming periods
###Code
# The data set contains hourly data, the training set ends at 01/02/2000 at 05:00
# These are predictions we are asking the model to make (it does not contain the target column y),
# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data
X_test
y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test)
# xy_nogap contains the predictions in the _automl_target_col column.
# Those same numbers are output in y_pred_no_gap
xy_nogap
###Output
_____no_output_____
###Markdown
Confidence intervals The forecasting model may be used to produce prediction intervals by running ```forecast_quantiles()```. This method accepts the same parameters as forecast().
###Code
quantiles = fitted_model.forecast_quantiles(X_test)
quantiles
###Output
_____no_output_____
###Markdown
Distribution forecastsOften the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such a case, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". This is called a "service level". Here is how you get quantile forecasts.
###Code
# specify which quantiles you would like
fitted_model.quantiles = [0.01, 0.5, 0.95]
# use forecast_quantiles function, not the forecast() one
y_pred_quantiles = fitted_model.forecast_quantiles(X_test)
# it all nicely aligns column-wise
pd.concat([X_test.reset_index(), y_pred_quantiles], axis=1)
###Output
_____no_output_____
###Markdown
Destination-date forecast: "just do something"In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to "destination date". The destination date still needs to fit within the maximum horizon from training.
###Code
# We will take the destination date as a last date in the test set.
dest = max(X_test[TIME_COLUMN_NAME])
y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)
# This form also shows how we imputed the predictors which were not given. (Not so well! Use with caution!)
xy_dest
###Output
_____no_output_____
###Markdown
Forecasting away from training data Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model "looks back" -- uses previous values of the target -- then we somehow need to provide those values to the model.The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per grain, so each grain can have a different forecast origin. The part of data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_test` (aligned with corresponding times in `X_test`).
###Code
# generate the same kind of test data we trained on,
# but now make the train set much longer, so that the test set will be in the future
X_context, y_context, X_away, y_away = get_timeseries(train_len=42, # train data was 30 steps long
test_len=4,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
# end of the data we trained on
print(X_train.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].max())
# start of the data we want to predict on
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].min())
###Output
_____no_output_____
###Markdown
There is a gap of 12 hours between end of training and beginning of `X_away`. (It looks like 13 because all timestamps point to the start of the one hour periods.) Using only `X_away` will fail without adding context data for the model to consume.
###Code
try:
y_pred_away, xy_away = fitted_model.forecast(X_away)
xy_away
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
How should we read that error message? The forecast origin is at the last time the model saw an actual value of `y` (the target). That was at the end of the training data! The model is attempting to forecast from the end of the training data. But the requested forecast periods are past the maximum horizon. We need to provide definite `y` values to establish the forecast origin. We will use this helper function to take the required amount of context from the data preceding the testing data. Its definition is intentionally simplified to keep the idea clear.
###Code
def make_forecasting_query(fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback):
"""
This function will take the full dataset, and create the query
to predict all values of the grain from the `forecast_origin`
forward for the next `horizon` horizons. Context from previous
`lookback` periods will be included.
fulldata: pandas.DataFrame a time series dataset. Needs to contain X and y.
time_column_name: string which column (must be in fulldata) is the time axis
target_column_name: string which column (must be in fulldata) is to be forecast
forecast_origin: datetime type the last time we (pretend to) have target values
horizon: timedelta how far forward, in time units (not periods)
lookback: timedelta how far back does the model look?
Example:
```
forecast_origin = pd.to_datetime('2012-09-01') + pd.DateOffset(days=5) # forecast 5 days after end of training
print(forecast_origin)
X_query, y_query = make_forecasting_query(data,
forecast_origin = forecast_origin,
horizon = pd.DateOffset(days=7), # 7 days into the future
lookback = pd.DateOffset(days=1), # model has lag 1 period (day)
)
```
"""
X_past = fulldata[ (fulldata[ time_column_name ] > forecast_origin - lookback) &
(fulldata[ time_column_name ] <= forecast_origin)
]
X_future = fulldata[ (fulldata[ time_column_name ] > forecast_origin) &
(fulldata[ time_column_name ] <= forecast_origin + horizon)
]
y_past = X_past.pop(target_column_name).values.astype(np.float)
y_future = X_future.pop(target_column_name).values.astype(np.float)
# Now take y_future and turn it into question marks
y_query = y_future.copy().astype(np.float) # because sometimes life hands you an int
y_query.fill(np.NaN)
print("X_past is " + str(X_past.shape) + " - shaped")
print("X_future is " + str(X_future.shape) + " - shaped")
print("y_past is " + str(y_past.shape) + " - shaped")
print("y_query is " + str(y_query.shape) + " - shaped")
X_pred = pd.concat([X_past, X_future])
y_pred = np.concatenate([y_past, y_query])
return X_pred, y_pred
###Output
_____no_output_____
###Markdown
Let's see where the context data ends - it ends, by construction, just before the testing data starts.
###Code
print(X_context.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
X_context.tail(5)
# Since the length of the lookback is 3,
# we need to add 3 periods from the context to the request
# so that the model has the data it needs
# Put the X and y back together for a while.
# They like each other and it makes them happy.
X_context[TARGET_COLUMN_NAME] = y_context
X_away[TARGET_COLUMN_NAME] = y_away
fulldata = pd.concat([X_context, X_away])
# forecast origin is the last point of data, which is one 1-hr period before test
forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)
# it is indeed the last point of the context
assert forecast_origin == X_context[TIME_COLUMN_NAME].max()
print("Forecast origin: " + str(forecast_origin))
# the model uses lags and rolling windows to look back in time
n_lookback_periods = max(lags)
lookback = pd.DateOffset(hours=n_lookback_periods)
horizon = pd.DateOffset(hours=max_horizon)
# now make the forecast query from context (refer to figure)
X_pred, y_pred = make_forecasting_query(fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME,
forecast_origin, horizon, lookback)
# show the forecast request aligned
X_show = X_pred.copy()
X_show[TARGET_COLUMN_NAME] = y_pred
X_show
###Output
_____no_output_____
###Markdown
Note that the forecast origin is at 17:00 for both grains, and periods from 18:00 are to be forecast.
###Code
# Now everything works
y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)
# show the forecast aligned
X_show = xy_away.reset_index()
# without the generated features
X_show[['date', 'grain', 'ext_predictor', '_automl_target_col']]
# prediction is in _automl_target_col
###Output
_____no_output_____
###Markdown
Automated Machine Learning Forecasting away from training data This notebook demonstrates the full interface to the `forecast()` function. The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follow the training data. However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high frequency forecasting** when forecasts need to be made more frequently than the model can be retrained. Examples are in Internet of Things and predictive cloud resource scaling. Here we show how to use the `forecast()` function when a time gap exists between training data and prediction period. Terminology:* forecast origin: the last period when the target value is known* forecast period(s): the period(s) for which the value of the target is desired* forecast horizon: the number of forecast periods* lookback: how many past periods (before the forecast origin) the model function depends on; the larger of the number of lags and the length of the rolling window* prediction context: `lookback` periods immediately preceding the forecast origin (see the small illustration below) Setup Please make sure you have followed the `configuration.ipynb` notebook so that your ML workspace information is saved in the config file.
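As a small illustration of the terminology above (a hedged sketch with hypothetical hourly timestamps; the actual data is generated further down):

```python
import pandas as pd

# suppose the last period with a known target value (the forecast origin) is 05:00
origin = pd.Timestamp("2000-01-02 05:00")
# with a lookback of 3 periods, the prediction context is the 3 hours up to and including the origin
context = pd.date_range(end=origin, periods=3, freq="H")
# with a horizon of 6, the forecast periods are the 6 hours after the origin
forecast_periods = pd.date_range(origin + pd.Timedelta(hours=1), periods=6, freq="H")
```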
###Code
import pandas as pd
import numpy as np
import logging
import warnings
from pandas.tseries.frequencies import to_offset
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
np.set_printoptions(precision=4, suppress=True, linewidth=120)
import azureml.core
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-forecast-function-demo'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Data For demonstration purposes we will generate the data artificially and use it for forecasting.
###Code
TIME_COLUMN_NAME = 'date'
GRAIN_COLUMN_NAME = 'grain'
TARGET_COLUMN_NAME = 'y'
def get_timeseries(train_len: int,
test_len: int,
time_column_name: str,
target_column_name: str,
grain_column_name: str,
grains: int = 1,
freq: str = 'H'):
"""
Return the time series of designed length.
:param train_len: The length of training data (one series).
:type train_len: int
:param test_len: The length of testing data (one series).
:type test_len: int
:param time_column_name: The desired name of a time column.
:type time_column_name: str
:param target_column_name: The desired name of a target column.
:type target_column_name: str
:param grain_column_name: The desired name of a grain column.
:type grain_column_name: str
:param grains: The number of grains.
:type grains: int
:param freq: The frequency string representing pandas offset.
see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
:type freq: str
:returns: the tuple of train and test data sets.
:rtype: tuple
"""
data_train = [] # type: List[pd.DataFrame]
data_test = [] # type: List[pd.DataFrame]
data_length = train_len + test_len
for i in range(grains):
X = pd.DataFrame({
time_column_name: pd.date_range(start='2000-01-01',
periods=data_length,
freq=freq),
target_column_name: np.arange(data_length).astype(float) + np.random.rand(data_length) + i*5,
'ext_predictor': np.asarray(range(42, 42 + data_length)),
grain_column_name: np.repeat('g{}'.format(i), data_length)
})
data_train.append(X[:train_len])
data_test.append(X[train_len:])
X_train = pd.concat(data_train)
y_train = X_train.pop(target_column_name).values
X_test = pd.concat(data_test)
y_test = X_test.pop(target_column_name).values
return X_train, y_train, X_test, y_test
n_test_periods = 6
n_train_periods = 30
X_train, y_train, X_test, y_test = get_timeseries(train_len=n_train_periods,
test_len=n_test_periods,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
###Output
_____no_output_____
###Markdown
Let's see what the training data looks like.
###Code
X_train.tail()
# plot the example time series
import matplotlib.pyplot as plt
whole_data = X_train.copy()
whole_data['y'] = y_train
for g in whole_data.groupby('grain'):
plt.plot(g[1]['date'].values, g[1]['y'].values, label=g[0])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Create the configuration and train a forecaster First generate the configuration, in which we:* Set metadata columns: target, time column and grain column names.* Ask for 10 iterations through models, the last of which will represent the Ensemble of the previous ones.* Validate our data using cross validation with the rolling window method.* Set normalized root mean squared error as the metric to select the best model.* Finally, we set the task to be forecasting.* By default, we apply the lag lead operator and rolling window to the target value, i.e. we use the previous values as a predictor for the future ones.
###Code
lags = [1,2,3]
rolling_window_length = 0 # don't do rolling windows
max_horizon = n_test_periods
time_series_settings = {
'time_column_name': TIME_COLUMN_NAME,
'grain_column_names': [ GRAIN_COLUMN_NAME ],
'max_horizon': max_horizon,
'target_lags': lags
}
###Output
_____no_output_____
###Markdown
Run the model selection and training process.
###Code
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_forecasting_function.log',
primary_metric='normalized_root_mean_squared_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=3,
verbosity = logging.INFO,
**time_series_settings)
local_run = experiment.submit(automl_config, show_output=True)
# Retrieve the best model to use it further.
_, fitted_model = local_run.get_output()
###Output
_____no_output_____
###Markdown
Forecasting from the trained model In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data. X_train is directly followed by the X_test Let's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts that are made on a daily or slower cadence typically fall into this category. Retraining the model every time benefits the accuracy because the most recent data is often the most informative. The `X_test` and `y_query` below, taken together, form the **forecast request**. The two are interpreted as aligned - `y_query` could actually be a column in `X_test`. `NaN`s in `y_query` are the question marks. These will be filled with the forecasts. When the forecast period immediately follows the training period, the models retain the last few points of data. You can simply fill `y_query` with question marks - the model has the data for the lookback already. Typical path: X_test is known, forecast all upcoming periods
###Code
# The data set contains hourly data, the training set ends at 01/02/2000 at 05:00
# These are predictions we are asking the model to make (does not contain the target column y),
# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data
X_test
y_query = np.repeat(np.NaN, X_test.shape[0])
y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test, y_query)
# xy_nogap contains the predictions in the _automl_target_col column.
# Those same numbers are output in y_pred_no_gap
xy_nogap
###Output
_____no_output_____
###Markdown
Distribution forecasts Often the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such a case, the control point is usually something like "we want the item to be in stock and not run out 99% of the time". This is called a "service level". Here is how you get quantile forecasts.
###Code
# specify which quantiles you would like
fitted_model.quantiles = [0.01, 0.5, 0.95]
# use forecast_quantiles function, not the forecast() one
y_pred_quantiles = fitted_model.forecast_quantiles(X_test, y_query)
# it all nicely aligns column-wise
pd.concat([X_test.reset_index(), pd.DataFrame({'query' : y_query}), y_pred_quantiles], axis=1)
###Output
_____no_output_____
###Markdown
Destination-date forecast: "just do something" In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to "destination date". The destination date still needs to fit within the maximum horizon from training.
###Code
# We will take the destination date as a last date in the test set.
dest = max(X_test[TIME_COLUMN_NAME])
y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)
# This form also shows how we imputed the predictors which were not given. (Not so well! Use with caution!)
xy_dest
###Output
_____no_output_____
###Markdown
Forecasting away from training data Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model "looks back" -- uses previous values of the target -- then we somehow need to provide those values to the model. The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per grain, so each grain can have a different forecast origin. The part of data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_test` (aligned with corresponding times in `X_test`).
###Code
# generate the same kind of test data we trained on,
# but now make the train set much longer, so that the test set will be in the future
X_context, y_context, X_away, y_away = get_timeseries(train_len=42, # train data was 30 steps long
test_len=4,
time_column_name=TIME_COLUMN_NAME,
target_column_name=TARGET_COLUMN_NAME,
grain_column_name=GRAIN_COLUMN_NAME,
grains=2)
# end of the data we trained on
print(X_train.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].max())
# start of the data we want to predict on
print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].min())
###Output
_____no_output_____
###Markdown
There is a gap of 12 hours between end of training and beginning of `X_away`. (It looks like 13 because all timestamps point to the start of the one hour periods.) Using only `X_away` will fail without adding context data for the model to consume.
###Code
try:
y_query = y_away.copy()
y_query.fill(np.NaN)
y_pred_away, xy_away = fitted_model.forecast(X_away, y_query)
xy_away
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
How should we read that error message? The forecast origin is at the last time the model saw an actual value of `y` (the target). That was at the end of the training data! Because the model received all `NaN` (and not an actual target value), it is attempting to forecast from the end of the training data. But the requested forecast periods are past the maximum horizon. We need to provide definite `y` values to establish the forecast origin. We will use this helper function to take the required amount of context from the data preceding the testing data. Its definition is intentionally simplified to keep the idea clear.
###Code
def make_forecasting_query(fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback):
"""
This function will take the full dataset, and create the query
to predict all values of the grain from the `forecast_origin`
forward for the next `horizon` horizons. Context from previous
`lookback` periods will be included.
fulldata: pandas.DataFrame a time series dataset. Needs to contain X and y.
time_column_name: string which column (must be in fulldata) is the time axis
target_column_name: string which column (must be in fulldata) is to be forecast
forecast_origin: datetime type the last time we (pretend to) have target values
horizon: timedelta how far forward, in time units (not periods)
lookback: timedelta how far back does the model look?
Example:
```
forecast_origin = pd.to_datetime('2012-09-01') + pd.DateOffset(days=5) # forecast 5 days after end of training
print(forecast_origin)
X_query, y_query = make_forecasting_query(data,
forecast_origin = forecast_origin,
horizon = pd.DateOffset(days=7), # 7 days into the future
lookback = pd.DateOffset(days=1), # model has lag 1 period (day)
)
```
"""
X_past = fulldata[ (fulldata[ time_column_name ] > forecast_origin - lookback) &
(fulldata[ time_column_name ] <= forecast_origin)
]
X_future = fulldata[ (fulldata[ time_column_name ] > forecast_origin) &
(fulldata[ time_column_name ] <= forecast_origin + horizon)
]
y_past = X_past.pop(target_column_name).values.astype(np.float)
y_future = X_future.pop(target_column_name).values.astype(np.float)
# Now take y_future and turn it into question marks
y_query = y_future.copy().astype(np.float) # because sometimes life hands you an int
y_query.fill(np.NaN)
print("X_past is " + str(X_past.shape) + " - shaped")
print("X_future is " + str(X_future.shape) + " - shaped")
print("y_past is " + str(y_past.shape) + " - shaped")
print("y_query is " + str(y_query.shape) + " - shaped")
X_pred = pd.concat([X_past, X_future])
y_pred = np.concatenate([y_past, y_query])
return X_pred, y_pred
###Output
_____no_output_____
###Markdown
Let's see where the context data ends - it ends, by construction, just before the testing data starts.
###Code
print(X_context.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
print( X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))
X_context.tail(5)
# Since the length of the lookback is 3,
# we need to add 3 periods from the context to the request
# so that the model has the data it needs
# Put the X and y back together for a while.
# They like each other and it makes them happy.
X_context[TARGET_COLUMN_NAME] = y_context
X_away[TARGET_COLUMN_NAME] = y_away
fulldata = pd.concat([X_context, X_away])
# forecast origin is the last point of data, which is one 1-hr period before test
forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)
# it is indeed the last point of the context
assert forecast_origin == X_context[TIME_COLUMN_NAME].max()
print("Forecast origin: " + str(forecast_origin))
# the model uses lags and rolling windows to look back in time
n_lookback_periods = max(max(lags), rolling_window_length)
lookback = pd.DateOffset(hours=n_lookback_periods)
horizon = pd.DateOffset(hours=max_horizon)
# now make the forecast query from context (refer to figure)
X_pred, y_pred = make_forecasting_query(fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME,
forecast_origin, horizon, lookback)
# show the forecast request aligned
X_show = X_pred.copy()
X_show[TARGET_COLUMN_NAME] = y_pred
X_show
###Output
_____no_output_____
###Markdown
Note that the forecast origin is at 17:00 for both grains, and periods from 18:00 are to be forecast.
###Code
# Now everything works
y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)
# show the forecast aligned
X_show = xy_away.reset_index()
# without the generated features
X_show[['date', 'grain', 'ext_predictor', '_automl_target_col']]
# prediction is in _automl_target_col
###Output
_____no_output_____ |
final_notebooks/.ipynb_checkpoints/final_BO-checkpoint.ipynb | ###Markdown
This notebook demonstrates how to use RcTorch to find optimal hyper-parameters for the differential equation $\dot y + q(t) y = f(t)$. Simple population: $\dot y + y =0$ (i.e. $q(t)=1$, $f(t)=0$) * Analytical solution: $y = y_0 e^{-t}$
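Concretely, the hyper-parameter search below scores a candidate reservoir by the mean squared residual of the ODE (see `custom_loss` further down). Ignoring the optional elastic-net regularization term, the loss is roughly $$L=\frac{1}{N}\sum_{i=1}^{N}\Big(\dot y(t_i)+\lambda\, y(t_i)-f(t_i)\Big)^2,$$ with $\lambda=1$ and $f(t)=0$ for the simple population equation.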
###Code
#define a reparameterization function (empirically we find that g = 1-e^(-t) works well)
def reparam(t, order = 1):
exp_t = torch.exp(-t)
derivatives_of_g = []
g = 1 - exp_t
g_dot = 1 - g
return g, g_dot
def plot_results(RC, results, integrator_model, ax = None):
"""plots a RC prediction and integrator model prediction for comparison
Parameters
----------
RC: RcTorchPrivate.esn
the RcTorch echostate network to evaluate. This model should already have been fit.
results: dictionary
the dictionary of results returned by the RC after fitting
integrator model: function
the model to be passed to odeint which is a gold standard integrator numerical method
for solving ODE's written in Fortran. You may find the documentation here:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html
ax: matplotlib.axes._subplots.AxesSubplot
If provided, the function will plot on this subplot axes
"""
X = RC.X.detach().cpu()
#int_sols = []
if not ax:
fig, ax = plt.subplots(1,1, figsize = (6,6))
for i, y in enumerate(results["ys"]):
if not i:
labels = ["RC", "integrator solution"]
else:
labels = [None, None]
y = y.detach().cpu()
ax.plot(X, y, color = "blue", label = labels[0])
#calculate the integrator prediction:
int_sol = odeint(integrator_model, y0s[i], np.array(X.cpu().squeeze()))
int_sol = torch.tensor(int_sol)
#int_sols.append(int_sol)
#plot the integrator prediction
ax.plot(X, int_sol, '--', color = "red", alpha = 0.9, label = labels[1])
ax.set_xlabel("time")
ax.set_ylabel("y")
ax.legend();
#return int_sols
def plot_rmsr(RC, results, force, log = False, ax = None, RMSR = True):
"""plots the root mean square residuals (RMSR) of a RC prediction directly from the loss function
Parameters
----------
RC: RcTorchPrivate.esn
the RcTorch echostate network to evaluate. This model should already have been fit.
results: dictionary
the dictionary of results returned by the RC after fitting
force: function
the force function describing the force term in the population equation
ax: matplotlib.axes._subplots.AxesSubplot
If provided, the function will plot on this subplot axes
"""
if not ax:
fig, ax = plt.subplots(1,1, figsize = (10, 4))
X = RC.X.detach().cpu()
ys, ydots = results["ys"], results["ydots"]
residuals = []
for i, y in enumerate(ys):
y = y.detach().cpu()
ydot = ydots[i].detach().cpu()
resids = custom_loss(X, y, ydot, None,
force = force,
ode_coefs = RC.ode_coefs,
mean = False, reg = False)
rmsr = torch.sqrt(resids)
if not i:
rmsr_tensor = rmsr
label = "individual trajectory rmsr"
else:
rmsr_tensor = torch.cat((rmsr_tensor, rmsr), axis = 1)
label = None
if log:
rmsr = torch.log10(rmsr)
ax.plot(X, rmsr, color = "red", alpha = 0.4, label = label)
residuals.append(resids)
mean_rmsr = torch.mean(rmsr_tensor, axis =1)
if log:
mean_rmsr = torch.log10(mean_rmsr)
ax.plot(X, mean_rmsr,
color = "blue",
alpha = 0.9,
label = "mean rmsr")
ax.legend();
ax.set_xlabel("time")
if log:
ax.set_ylabel("log rmsr")
else:
ax.set_ylabel("rmsr")
print(torch.mean(mean_rmsr))
def force(X, A = 0):
return torch.zeros_like(X)#A*torch.sin(X)
lam =1
def custom_loss(X , y, ydot, out_weights, lam = lam, force = force, reg = False,
ode_coefs = None, init_conds = None,
enet_alpha = None, enet_strength =None, mean = True):
#with paramization
L = ydot + lam * y - force(X)
if reg:
#assert False
weight_size_sq = torch.mean(torch.square(out_weights))
weight_size_L1 = torch.mean(torch.abs(out_weights))
L_reg = enet_strength*(enet_alpha * weight_size_sq + (1- enet_alpha) * weight_size_L1)
L = L + 0.1 * L_reg
L = torch.square(L)
if mean:
L = torch.mean(L)
return L
#declare the bounds dict. We search for the variables within the specified bounds.
# if a variable is declared as a float or integer like n_nodes or dt, these variables are fixed.
bounds_dict = {"connectivity" : (-2, -0.5), #log space
"spectral_radius" : (1, 2), #lin space
"n_nodes" : 250,
"regularization" : (-4, 4), #log space
"leaking_rate" : (0, 1), #linear space
"dt" : -2.75, #log space
"bias": (-0.75,0.75) #linear space
}
#set up data
BURN_IN = 500 #how many time points of states to throw away before starting optimization.
x0, xf = 0, 5
nsteps = int(abs(xf - x0)/(10**bounds_dict["dt"]))
xtrain = torch.linspace(x0, xf, nsteps, requires_grad=False).view(-1,1)
int(xtrain.shape[0] * 0.5)
#declare the initial conditions (each initial condition corresponds to a different curve)
y0s = np.arange(0.1, 10.1, 0.1)
%%time
#declare the esn_cv optimizer: this class will run bayesian optimization to optimize the bounds dict.
#for more information see the github.
esn_cv = EchoStateNetworkCV(bounds = bounds_dict,
interactive = True,
batch_size = 1, #batch size is parallel
cv_samples = 2, #number of cv_samples, random start points
initial_samples = 100, #number of random samples before optimization starts
subsequence_length = int(xtrain.shape[0] * 0.8), #combine len of tr + val sets
validate_fraction = 0.3, #validation prop of tr+val sets
log_score = True, #log-residuals
random_seed = 209, # random seed
ODE_order = 1, #order of eq
esn_burn_in = BURN_IN, #states to throw away before calculating output
#see turbo ref:
length_min = 2 **(-7),
success_tolerance = 10,
)
#optimize the network:
opt_hps = esn_cv.optimize(x = xtrain,
reparam_f = reparam,
ODE_criterion = custom_loss,
init_conditions = [y0s],
force = force,
ode_coefs = [1,1],
backprop_f = None,
n_outputs = 1,
eq_system = False)
# some particularly good runs:
# opt_hps = {'dt': 0.0031622776601683794,
# 'n_nodes': 250,
# 'connectivity': 0.7170604557008349,
# 'spectral_radius': 1.5755887031555176,
# 'regularization': 0.00034441529823729916,
# 'leaking_rate': 0.9272222518920898,
# 'bias': 0.1780446171760559}
# opt_hps = {'dt': 0.0017782794100389228,
# 'n_nodes': 250,
# 'connectivity': 0.11197846061157432,
# 'spectral_radius': 1.7452095746994019,
# 'regularization': 0.00012929296298723957,
# 'leaking_rate': 0.7733328938484192,
# 'bias': 0.1652531623840332}
opt_hps
y0s = np.arange(-10, 10.1, 1)
RC = EchoStateNetwork(**opt_hps,
random_state = 209,
dtype = torch.float32)
train_args = {"X" : xtrain.view(-1,1),
"burn_in" : int(BURN_IN),
"ODE_order" : 1,
"force" : force,
"reparam_f" : reparam,
"ode_coefs" : [1, 1]}
results = RC.fit(init_conditions = [y0s,1],
SOLVE = True,
train_score = True,
ODE_criterion = custom_loss,
**train_args)
def simple_pop(y, t, t_pow = 0, force_k = 0, k = 1):
dydt = -k * y *t**t_pow + force_k*np.sin(t)
return dydt
plot_results(RC, results, simple_pop)
plot_rmsr(RC, results, force = force, log = True)
end_time = time.time()
print(f'Total notebook runtime: {end_time - start_time:.2f} seconds')
###Output
_____no_output_____ |
v1/Db2 Jupyter Extensions Tutorial.ipynb | ###Markdown
Db2 Jupyter Notebook Extensions Tutorial The SQL code tutorials for Db2 rely on a Jupyter notebook extension, commonly referred to as a "magic" command. All of the notebooks begin with the following command, which will load the extension and allow the remainder of the notebook to use the %sql magic command: %run db2.ipynb The cell below will load the Db2 extension. Note that it will take a few seconds for the extension to load, so you should generally wait until the "Db2 Extensions Loaded" message is displayed in your notebook. In the event you get an error on the load of the ibm_db library, modify the command to include the -update option:```%run db2.ipynb -update```
###Code
%run db2.ipynb
###Output
_____no_output_____
###Markdown
Connections to Db2 Before any SQL commands can be issued, a connection needs to be made to the Db2 database that you will be using. The connection can be done manually (through the use of the CONNECT command), or automatically when the first `%sql` command is issued. The Db2 magic command tracks whether or not a connection has occurred in the past and saves this information between notebooks and sessions. When you start up a notebook and issue a command, the program will reconnect to the database using your credentials from the last session. In the event that you have not connected before, the system will prompt you for all the information it needs to connect. This information includes:- Database name (SAMPLE) - Hostname - localhost (enter an IP address if you need to connect to a remote server) - PORT - 50000 (this is the default but it could be different) - Userid - DB2INST1 - Password - No password is provided so you have to enter a value - Maximum Rows - 10 lines of output are displayed when a result set is returned There will be default values presented in the panels that you can accept, or enter your own values. All of the information will be stored in the directory where the notebooks are stored. Once you have entered the information, the system will attempt to connect to the database for you and then you can run all of the SQL scripts. More details on the CONNECT syntax will be found in a section below. The next statement will force a CONNECT to occur with the default values. If you have not connected before, it will prompt you for the information.
###Code
%sql CONNECT
###Output
_____no_output_____
###Markdown
Line versus Cell Command The Db2 extension is made up of one magic command that works either at the LINE level (`%sql`) or at the CELL level (`%%sql`). If you only want to execute a SQL command on one line in your script, use the %sql form of the command. If you want to run a larger block of SQL, then use the `%%sql` form. Note that when you use the `%%sql` form of the command, the entire contents of the cell is considered part of the command, so you cannot mix other commands in the cell. The following is an example of a line command:
###Code
%sql VALUES 'HELLO THERE'
###Output
_____no_output_____
###Markdown
If you have SQL that requires multiple lines, or if you need to execute many lines of SQL, then you should be using the CELL version of the `%sql` command. To start a block of SQL, start the cell with `%%sql` and do not place any SQL on the same line as the command. Subsequent lines can contain SQL code, with each SQL statement delimited with the semicolon (`;`). You can change the delimiter if required for procedures, etc... More details on this later.
###Code
%%sql
VALUES
1,
2,
3
###Output
_____no_output_____
###Markdown
If you are using a single statement then there is no need to use a delimiter. However, if you are combining a number of commands then you must use the semicolon.
###Code
%%sql
DROP TABLE STUFF;
CREATE TABLE STUFF (A INT);
INSERT INTO STUFF VALUES
1,2,3;
SELECT * FROM STUFF;
###Output
_____no_output_____
###Markdown
The script will generate messages and output as it executes. Each SQL statement that generates results will have a table displayed with the result set. If a command is executed, the results of the execution get listed as well. The script you just ran probably generated an error on the DROP table command. Options Both forms of the `%sql` command have options that can be used to change the behavior of the code. For both forms of the command (`%sql`, `%%sql`), the options must be on the same line as the command: `%sql -t ...` or `%%sql -t` The only difference is that the `%sql` command can have SQL following the parameters, while the `%%sql` requires the SQL to be placed on subsequent lines. There are a number of parameters that you can specify as part of the `%sql` statement. * -d - Use alternative delimiter* -t - Time the statement execution* -q - Suppress messages * -j - JSON formatting of a column* -a - Show all output* -pb - Bar chart of results* -pp - Pie chart of results * -pl - Line chart of results* -i - Interactive mode with Pixiedust* -sampledata - Load the database with the sample EMPLOYEE and DEPARTMENT tables* -r - Return the results into a variable (list of rows) Multiple parameters are allowed on a command line. Each option should be separated by a space: `%sql -a -j ...` A SELECT statement will return the results as a dataframe and display the results as a table in the notebook. If you use the assignment statement, the dataframe will be placed into the variable and the results will not be displayed: `r = %sql SELECT * FROM EMPLOYEE` The sections below will explain the options in more detail. Delimiters The default delimiter for all SQL statements is the semicolon. However, this becomes a problem when you try to create a trigger, function, or procedure that uses SQLPL (or PL/SQL). Use the -d option to turn the SQL delimiter into the at (`@`) sign and -q to suppress error messages. The semi-colon is then ignored as a delimiter. For example, the following SQL will use the `@` sign as the delimiter.
###Code
%%sql -d -q
DROP TABLE STUFF
@
CREATE TABLE STUFF (A INT)
@
INSERT INTO STUFF VALUES
1,2,3
@
SELECT * FROM STUFF
@
###Output
_____no_output_____
###Markdown
The delimiter change will only take place for the statements following the `%%sql` command. Subsequent cells in the notebook will still use the semicolon. You must use the -d option for every cell that needs to use the semicolon in the script. Limiting Result Sets The default number of rows displayed for any result set is 10. You can change this value when initially connecting to the database. If you want to override the number of rows displayed you can either update the control variable, or use the -a option. The -a option will display all of the rows in the answer set. For instance, the following SQL will only show 10 rows even though we inserted 15 values:
###Code
%sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
###Output
_____no_output_____
###Markdown
You will notice that the displayed result will split the visible rows into the first 5 rows and the last 5 rows. Using the -a option will display all values:
###Code
%sql -a values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
###Output
_____no_output_____
###Markdown
To change the default value of rows displayed, you can either do a CONNECT RESET (discussed later) or set the Db2 control variable maxrows to a different value. A value of -1 will display all rows.
###Code
# Save previous version of maximum rows
last_max = _settings['maxrows']
_settings['maxrows'] = 5
%sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
###Output
_____no_output_____
###Markdown
A special note regarding the output from a SELECT statement. If the SQL statement is the last line of a block, the results will be displayed by default (unless you assigned the results to a variable). If the SQL is in the middle of a block of statements, the results will not be displayed. To explicitly display the results you must use the display function (or pDisplay if you have imported another library like pixiedust which overrides the pandas display function).
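For example, a short sketch (using the EMPLOYEE table loaded by the sample data):

```
x = %sql SELECT * FROM EMPLOYEE FETCH FIRST 3 ROWS ONLY
display(x)    # needed because the %sql result is not the last line of the cell
print("done")
```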
###Code
# Set the maximum back
_settings['maxrows'] = last_max
%sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
###Output
_____no_output_____
###Markdown
Quiet Mode Every SQL statement will result in some output. You will either get an answer set (SELECT), or an indication of whether the command worked. For instance, the following set of SQL will generate some error messages since the tables will probably not exist:
###Code
%%sql
DROP TABLE TABLE_NOT_FOUND;
DROP TABLE TABLE_SPELLED_WRONG;
###Output
_____no_output_____
###Markdown
If you know that these errors may occur you can silence them with the -q option.
###Code
%%sql -q
DROP TABLE TABLE_NOT_FOUND;
DROP TABLE TABLE_SPELLED_WRONG;
###Output
_____no_output_____
###Markdown
SQL output will not be suppressed, so the following command will still show the results.
###Code
%%sql -q
DROP TABLE TABLE_NOT_FOUND;
DROP TABLE TABLE_SPELLED_WRONG;
VALUES 1,2,3;
###Output
_____no_output_____
###Markdown
Variables in %sql Blocks The `%sql` syntax allows you to pass local variables to a script. There are 5 predefined variables defined in the program:- database - The name of the database you are connected to- uid - The userid that you connected with- host - The IP address of the host system- port - The port number of the host system- max - The maximum number of rows to return in an answer set These variables are all part of a structure called _settings. To pass a value to a LINE script, use the braces {} to surround the name of the variable: {_settings["database"]} The next line will display the currently connected database.
###Code
%sql VALUES '{_settings["database"]}'
###Output
_____no_output_____
###Markdown
You cannot use variable substitution with the CELL version of the `%%sql` command. If your SQL statement extends beyond one line, and you want to use variable substitution, you can use a couple of techniques to make it look like one line. The simplest way is to add the backslash character (```\```) at the end of every line. The following example illustrates the technique.
###Code
empno = '000010'
%sql SELECT LASTNAME FROM \
EMPLOYEE \
WHERE \
EMPNO = '{empno}'
###Output
_____no_output_____
###Markdown
The other option for passing variables to a `%sql` or `%%sql` statement is to use the embedded variable format. This requires that the variable be prefixed with a colon (`:`). When using this format, you do not need to use quote characters around the variable since its value is extracted at run time. The first example uses the value of the variable.
###Code
empno = '000010'
%sql select lastname from employee where empno='{empno}'
###Output
_____no_output_____
###Markdown
This example uses the embedded variable name (`:empno`).
###Code
%sql select lastname from employee where empno=:empno
###Output
_____no_output_____
###Markdown
Timing SQL Statements Sometimes you want to see how the execution of a statement changes with the addition of indexes or other optimization changes. The -t option will run the statement on the LINE, or one SQL statement in the CELL, for exactly one second. The results will be displayed and optionally placed into a variable. The syntax of the command is: `sql_time = %sql -t SELECT * FROM EMPLOYEE` For instance, the following SQL will time the VALUES clause.
###Code
%sql -t VALUES 1,2,3,4,5,6,7,8,9
###Output
_____no_output_____
###Markdown
When timing a statement, no output will be displayed. If your SQL statement takes longer than one second you will need to modify the db2 _runtime variable. This variable must be set to the number of seconds that you want to run the statement.
###Code
_runtime = 5
%sql -t VALUES 1,2,3,4,5,6,7,8,9
###Output
_____no_output_____
###Markdown
JSON Formatting Db2 supports querying JSON that is stored in a column within a table. Standard output would just display the JSON as a string. For instance, the following statement would just return a large string of output.
###Code
%%sql
VALUES
'{
"empno":"000010",
"firstnme":"CHRISTINE",
"midinit":"I",
"lastname":"HAAS",
"workdept":"A00",
"phoneno":[3978],
"hiredate":"01/01/1995",
"job":"PRES",
"edlevel":18,
"sex":"F",
"birthdate":"08/24/1963",
"pay" : {
"salary":152750.00,
"bonus":1000.00,
"comm":4220.00}
}'
###Output
_____no_output_____
###Markdown
Adding the -j option to the %sql (or %%sql) command will format the first column of a result set to better display the structure of the document. Note that if your answer set has additional columns associated with it, they will not be displayed in this format.
###Code
%%sql -j
VALUES
'{
"empno":"000010",
"firstnme":"CHRISTINE",
"midinit":"I",
"lastname":"HAAS",
"workdept":"A00",
"phoneno":[3978],
"hiredate":"01/01/1995",
"job":"PRES",
"edlevel":18,
"sex":"F",
"birthdate":"08/24/1963",
"pay" : {
"salary":152750.00,
"bonus":1000.00,
"comm":4220.00}
}'
###Output
_____no_output_____
###Markdown
Plotting Sometimes it would be useful to display a result set as either a bar, pie, or line chart. The first one or two columns of a result set need to contain the values needed to plot the information. The three possible plot options are: * -pb - bar chart (x,y)* -pp - pie chart (y)* -pl - line chart (x,y) The following data will be used to demonstrate the different charting options.
###Code
%sql values 1,2,3,4,5
###Output
_____no_output_____
###Markdown
Since the results only have one column, the pie, line, and bar charts will not have any labels associated with them. The first example is a bar chart.
###Code
%sql -pb values 1,2,3,4,5
###Output
_____no_output_____
###Markdown
The same data as a pie chart.
###Code
%sql -pp values 1,2,3,4,5
###Output
_____no_output_____
###Markdown
And finally a line chart.
###Code
%sql -pl values 1,2,3,4,5
###Output
_____no_output_____
###Markdown
If you retrieve two columns of information, the first column is used for the labels (X axis or pie slices) and the second column contains the data.
###Code
%sql -pb values ('A',1),('B',2),('C',3),('D',4),('E',5)
###Output
_____no_output_____
###Markdown
For a pie chart, the first column is used to label the slices, while the data comes from the second column.
###Code
%sql -pp values ('A',1),('B',2),('C',3),('D',4),('E',5)
###Output
_____no_output_____
###Markdown
Finally, for a line chart, the first column contains the x labels and the second column the y values.
###Code
%sql -pl values ('A',1),('B',2),('C',3),('D',4),('E',5)
###Output
_____no_output_____
###Markdown
The following SQL will plot the number of employees per department.
###Code
%%sql -pb
SELECT WORKDEPT, COUNT(*)
FROM EMPLOYEE
GROUP BY WORKDEPT
###Output
_____no_output_____
###Markdown
The final option for plotting data is to use interactive mode `-i`. This will display the data using an open-source project called Pixiedust. You can view the results in a table and then interactively create a plot by dragging and dropping column names into the appropriate slot. The next command will place you into interactive mode.
###Code
%sql -i select * from employee
###Output
_____no_output_____
###Markdown
Sample Data Many of the Db2 notebooks depend on two of the tables that are found in the SAMPLE database. Rather than having to create the entire SAMPLE database, this option will create and populate the EMPLOYEE and DEPARTMENT tables in your database. Note that if you already have these tables defined, they will not be dropped.
###Code
%sql -sampledata
###Output
_____no_output_____
###Markdown
Result Sets By default, any `%sql` block will return the contents of a result set as a table that is displayed in the notebook. The results are displayed using a feature of pandas dataframes. The following select statement demonstrates a simple result set.
###Code
%sql select * from employee fetch first 3 rows only
###Output
_____no_output_____
###Markdown
You can assign the result set directly to a variable.
###Code
x = %sql select * from employee fetch first 3 rows only
###Output
_____no_output_____
###Markdown
The variable x contains the dataframe that was produced by the `%sql` statement, so you can access the result set through this variable or display the contents by just referring to it on a line by itself.
###Code
x
###Output
_____no_output_____
###Markdown
There is an additional way of capturing the data through the use of the `-r` flag: `var = %sql -r select * from employee` Rather than returning a dataframe result set, this option will produce a list of rows. Each row is a list itself. The rows and columns all start at zero (0), so to access the first column of the first row you would use var[0][0].
###Code
rows = %sql -r select * from employee fetch first 3 rows only
print(rows[0][0])
###Output
_____no_output_____
###Markdown
The number of rows in the result set can be determined by using the length function.
###Code
print(len(rows))
###Output
_____no_output_____
###Markdown
If you want to iterate over all of the rows and columns, you could use the following Python syntax instead of creating a for loop that goes from 0 to 41.
###Code
for row in rows:
line = ""
for col in row:
line = line + str(col) + ","
print(line)
###Output
_____no_output_____
###Markdown
Since the data may be returned in different formats (like integers), you should use the str() function to convert the values to strings. Otherwise, the concatenation used in the above example will fail. For instance, the 6th field is a birthdate field. If you retrieve it as an individual value and try to concatenate a string to it, you get the following error.
###Code
print("Birth Date="+rows[0][6])
###Output
_____no_output_____
###Markdown
You can fix this problem by adding the str function to convert the date.
###Code
print("Birth Date="+str(rows[0][6]))
###Output
_____no_output_____
###Markdown
Db2 CONNECT Statement As mentioned at the beginning of this notebook, connecting to Db2 is automatically done when you issue your first `%sql` statement. Usually the program will prompt you with what options you want when connecting to a database. The other option is to use the CONNECT statement directly. The CONNECT statement is similar to the native Db2 CONNECT command, but includes some options that allow you to connect to databases that have not been catalogued locally. The CONNECT command has the following format: `%sql CONNECT TO <database> USER <userid> USING <password | ?> HOST <ip address> PORT <port number>` If you use a "?" for the password field, the system will prompt you for a password. This avoids typing the password as clear text on the screen. If a connection is not successful, the system will print the error message associated with the connect request. If the connection is successful, the parameters are saved on your system and will be used the next time you run a SQL statement, or when you issue the %sql CONNECT command with no parameters. If you want to force the program to connect to a different database (with prompting), use the CONNECT RESET command. The next time you run a SQL statement, the program will prompt you for the connection information and force a reconnect.
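For example, a connection using the default values mentioned at the start of this notebook might look like this (a sketch; substitute your own credentials and host):

```
%sql CONNECT TO SAMPLE USER DB2INST1 USING ? HOST localhost PORT 50000
```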
###Code
%sql CONNECT RESET
%sql CONNECT
###Output
_____no_output_____ |
notebooks/Sparse.ipynb | ###Markdown
Running `H.stark_map()` is essentially equivalent to:
###Code
field_au = 100.0 * field * e * a0 / En_h # atomic units
stark_map = np.array([linalg.eigvalsh(H.total(Fz=f).toarray()) for f in tqdm(field_au)])
# plot
for level in stark_map.T:
plt.plot(field, level, c="k", alpha=0.1)
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
%timeit linalg.eigvalsh(H.total(Fz=2.5e-10).toarray())
###Output
287 ms ± 13.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Sometimes a large basis is required even if only a fraction of the eigenvalues are of interest. Fortunately, it is possible to compute a subset of the eigenvalues. Partial diagonalization of the sparse matrix saves memory and, for a small subset, *can* be significantly faster than full diagonalization.
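As a minimal, self-contained illustration of the shift-invert call used below (a sketch on a small random symmetric stand-in matrix, not the actual Hamiltonian):

```python
import scipy.sparse as sparse
from scipy.sparse.linalg import eigsh

# small random symmetric sparse matrix as a stand-in
A = sparse.random(200, 200, density=0.01, format="csr", random_state=0)
A = (A + A.T) / 2
# shift-invert mode: return the 4 eigenvalues closest to sigma, keeping the matrix sparse
vals = eigsh(A, k=4, sigma=0.25, return_eigenvectors=False)
```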
###Code
subset = list(H.basis.argwhere(lambda x: x.n == 36))
num_states = len(subset)
e0 = np.median(H.e0().diagonal()[subset])
sparse_map = np.array(
[
sp.linalg.eigsh(
H.total(Fz=f), k=num_states, sigma=e0, return_eigenvectors=False
)
for f in tqdm(field_au)
]
)
# plot
for level in sparse_map.T:
plt.plot(field, level, c="k", alpha=0.5)
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
###Output
100%|███████████████████████████████████████████| 20/20 [00:03<00:00, 6.14it/s]
###Markdown
Compare sparse and dense calculations:
###Code
# dense
for i in subset:
plt.plot(field, stark_map[:, i], c="k")
# sparse
for level in sparse_map.T:
plt.plot(-field, level, c="r")
plt.ylim(np.min(sparse_map), np.max(sparse_map))
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
###Output
_____no_output_____
###Markdown
Note, if the subset contains a partial $n$ manifold, exactly which eigenvalues are computed can vary abruptly from one field to the next. This can get confusing when attempting to trace states through crossings.
###Code
num_states = 12
sparse_map_12 = np.array(
[
sp.linalg.eigsh(
H.total(Fz=f), k=num_states, sigma=e0, return_eigenvectors=False
)
for f in tqdm(field_au)
]
)
# dense
for i in subset:
plt.plot(field, stark_map[:, i], c="k", alpha=0.2)
# sparse
for level in sparse_map_12.T:
plt.plot(field, level, c="r", ls="", marker="o", alpha=0.5)
plt.ylim(np.min(sparse_map_12), np.max(sparse_map_12))
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
%timeit sp.linalg.eigsh(H.total(Fz=2.5e-10), k=num_states, sigma=e0, return_eigenvectors=False)
###Output
22.8 ms ± 3.59 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
However, the sparse calculation cannot compute all of the eigenvectors (`k < num_states`). If a sizable fraction of the eigenvalues are required, it is much faster to use a dense matrix.
###Code
# dense
%time linalg.eigvalsh(H.total(Fz=2.5e-10).toarray())
# sparse
k = H.basis.num_states - 1
%time sp.linalg.eigsh(H.total(Fz=2.5e-10), k=k, return_eigenvectors=False)
###Output
CPU times: user 5.29 s, sys: 68 ms, total: 5.36 s
Wall time: 2.69 s
###Markdown
Running `H.stark_map()` is essentially equivalent to:
###Code
field_au = 100.0 * field * e * a0 / En_h # atomic units
stark_map = np.array(
[linalg.eigvalsh(H.matrix(Fz=f).toarray()) for f in tqdm(field_au)]
)
# plot
for level in stark_map.T:
plt.plot(field, level, c="k", alpha=0.1)
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
%timeit linalg.eigvalsh(H.matrix(Fz=2.5e-10).toarray())
###Output
304 ms ± 56.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Sometimes a large basis is required to get accurate energy levels, even if only a fraction of the eigenvalues are of interest. Fortunately, it is possible to compute a subset of the eigenvalues, which is often a little faster. Moreover, the total Hamiltonian matrix can remain sparse, potentially saving a lot of memory.
###Code
subset = list(H.basis.argwhere(lambda x: x.n == 36))
k = len(subset)
sigma = np.median(H.e0[subset])
sparse_map = np.array(
[
sp.linalg.eigsh(H.matrix(Fz=f), k=k, sigma=sigma, return_eigenvectors=False)
for f in tqdm(field_au)
]
)
# plot
for level in sparse_map.T:
plt.plot(field, level, c="k", alpha=0.5)
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
###Output
100%|██████████| 20/20 [00:03<00:00, 6.61it/s]
###Markdown
(The above calculation can probably be sped up by moving some of the shift-inversion steps outside of the loop.) Compare sparse and dense calculations:
###Code
# dense
for i in subset:
plt.plot(field, stark_map[:, i], c="k")
# sparse
for level in sparse_map.T:
plt.plot(-field, level, c="r")
plt.ylim(np.min(sparse_map), np.max(sparse_map))
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
###Output
_____no_output_____
###Markdown
If the subset contains a partial $n$ manifold, exactly which eigenvalues are computed can vary abruptly from one field to the next. This can get confusing when attempting to trace states through crossings.
###Code
k = 12
sparse_map_12 = np.array(
[
sp.linalg.eigsh(H.matrix(Fz=f), k=k, sigma=sigma, return_eigenvectors=False)
for f in tqdm(field_au)
]
)
# dense
for i in subset:
plt.plot(field, stark_map[:, i], c="k", alpha=0.2)
# sparse
for level in sparse_map_12.T:
plt.plot(field, level, c="r", ls="", marker="o", alpha=0.5)
plt.ylim(np.min(sparse_map_12), np.max(sparse_map_12))
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
%timeit sp.linalg.eigsh(H.matrix(Fz=2.5e-10), k=k, sigma=sigma, return_eigenvectors=False)
###Output
21.4 ms ± 1.33 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Running `H.stark_map()` is essentially equivilent to:
###Code
field_au = 100.0 * field * e * a0 / En_h # atomic units
stark_map = np.array(
[linalg.eigvalsh(H.total(Fz=f).toarray()) for f in tqdm(field_au)]
)
# plot
for level in stark_map.T:
plt.plot(field, level, c="k", alpha=0.1)
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
%timeit linalg.eigvalsh(H.total(Fz=2.5e-10).toarray())
###Output
445 ms ± 63.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Sometimes a large basis is required even if only a fraction of the eigenvalues are of interest. Fortunately, it is possible to compute a subset of the eigenvalues. Partial diagonalization of the sparse matrix saves memory and, for a small subset, *can* be significantly faster than full diagonalization.
###Code
subset = list(H.basis.argwhere(lambda x: x.n == 36))
num_states = len(subset)
e0 = np.median(H.e0().diagonal()[subset])
sparse_map = np.array(
[
sp.linalg.eigsh(H.total(Fz=f), k=num_states, sigma=e0, return_eigenvectors=False)
for f in tqdm(field_au)
]
)
# plot
for level in sparse_map.T:
plt.plot(field, level, c="k", alpha=0.5)
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
###Output
100%|███████████████████████████████████████████| 20/20 [00:04<00:00, 4.35it/s]
###Markdown
Compare sparse and dense calculations:
###Code
# dense
for i in subset:
plt.plot(field, stark_map[:, i], c="k")
# sparse
for level in sparse_map.T:
plt.plot(-field, level, c="r")
plt.ylim(np.min(sparse_map), np.max(sparse_map))
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
###Output
_____no_output_____
###Markdown
If the subset contains a partial $n$ manifold, exactly which eigenvalues are computed can vary abruptly from one field to the next. This can get confusing when attempting to trace states through crossings.
###Code
num_states = 12
sparse_map_12 = np.array(
[
sp.linalg.eigsh(H.total(Fz=f), k=num_states, sigma=e0, return_eigenvectors=False)
for f in tqdm(field_au)
]
)
# dense
for i in subset:
plt.plot(field, stark_map[:, i], c="k", alpha=0.2)
# sparse
for level in sparse_map_12.T:
plt.plot(field, level, c="r", ls="", marker="o", alpha=0.5)
plt.ylim(np.min(sparse_map_12), np.max(sparse_map_12))
plt.xlabel("electric field (V / cm)")
plt.ylabel("energy (a. u.)")
plt.show()
%timeit sp.linalg.eigsh(H.total(Fz=2.5e-10), k=num_states, sigma=e0, return_eigenvectors=False)
###Output
29 ms ± 5.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
However, the sparse calculation cannot compute all of the eigenvectors (`k < num_states`). If a sizable fraction of the eigenvalues are required, it is much faster to diagonalise a dense matrix.
###Code
# dense
%time linalg.eigvalsh(H.total(Fz=2.5e-10).toarray())
# sparse
k = H.basis.num_states - 1
%time sp.linalg.eigsh(H.total(Fz=2.5e-10), k=k, return_eigenvectors=False)
###Output
CPU times: user 5.95 s, sys: 104 ms, total: 6.05 s
Wall time: 3.09 s
|
Comparison_of_Numerical_Methods_for_Differential_Equations.ipynb | ###Markdown
Comparison of Numerical Methods for Differential Equations Numerical Solutions presented are* Euler's Method* Heun's Method* Runge-Kutta Method of 4th Order For the Differential Equation:> $$\dfrac{dy(t)}{dt}=-\dfrac{y(t)}{t}+\dfrac{\cos{2t}}{t}$$With Analytical Solution:> $$y(t)=\dfrac{\sin{t}\cos{t}}{t}$$
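A quick check that the stated solution satisfies the equation: since $y(t)=\dfrac{\sin{t}\cos{t}}{t}=\dfrac{\sin{2t}}{2t}$, differentiating gives$$\dfrac{dy}{dt}=\dfrac{\cos{2t}}{t}-\dfrac{\sin{2t}}{2t^{2}}=-\dfrac{y(t)}{t}+\dfrac{\cos{2t}}{t},$$which is exactly the right-hand side above.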
###Code
import matplotlib.pyplot as plt
import numpy as np
from numpy import sin, cos
from scipy import optimize
def f(t,y):
return -y/t + cos(2*t)/t
t_i = 0 #Already implied in the vector t construction, will be used as 1e-100 to exclude the blow-up of the function at the origin
y_i = 1 #Will be inserted to be used through the propagation
def tRange(N):
return np.linspace(1e-100,5,N)
def AnalSol(t):
return sin(t)*cos(t)/t
def euler(un,tn,h,f):
return un+h*f(tn, un)
def heun(un,tn,h,f):
return un + h/2*(f(tn,un)+f(tn+h, un+h*f(tn,un)))
def RK4(un,tn,h,f):
k1=f(tn,un)
k2=f(tn+.5*h,un+.5*h*k1)
k3=f(tn+.5*h,un+.5*h*k2)
k4=f(tn+h,un+h*k3)
k=(1/6)*(k1+2*k2+2*k3+k4)
return un+h*k
#Compute the Euler, Heun, and RK4 solutions for several step counts N
N = np.array([5, 10, 15, 20, 25])+1
t_smooth = np.arange(1e-100,5.01,0.1)
y_Analytical = AnalSol(t_smooth)
y_EulerAll = []
y_HeunAll = []
y_RK4All = []
for i in range(5):
t = tRange(N[i])
h = t[1] #given that t[0]=0
y_Euler = [y_i]
y_Heun = [y_i]
y_RK4 = [y_i]
for j in range(0,len(t)-1):
y_Euler.append(euler(y_Euler[j], t[j], h, f))
y_Heun.append(heun(y_Heun[j], t[j], h, f))
y_RK4.append(RK4(y_RK4[j], t[j], h, f))
y_EulerAll.append(y_Euler)
y_HeunAll.append(y_Heun)
y_RK4All.append(y_RK4)
#Error Analysis
testN = np.arange(2,303,20)+1
EulerError = np.zeros_like(testN,dtype=float)
HeunError = np.zeros_like(testN,dtype=float)
RK4Error = np.zeros_like(testN,dtype=float)
for i in range(len(testN)):
t = tRange(testN[i])
h = t[1] #given that t[0]=0
y_Euler = [y_i]
y_Heun = [y_i]
y_RK4 = [y_i]
for j in range(0,len(t)-1):
y_Euler.append(euler(y_Euler[j], t[j], h, f))
y_Heun.append(heun(y_Heun[j], t[j], h, f))
y_RK4.append(RK4(y_RK4[j], t[j], h, f))
EulerError[i] = np.absolute(y_Euler[-1] - y_Analytical[-1])
HeunError[i] = np.absolute(y_Heun[-1] - y_Analytical[-1])
RK4Error[i] = np.absolute(y_RK4[-1] - y_Analytical[-1])
def linearFit(x,a,b): #To fit the error trend
return a*x+b
testN1 = testN-1
logTestN = np.log(testN1)
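# since h is proportional to 1/N, a global error eps ~ C*h^p gives
# log(eps) = -p*log(N) + const, so the magnitude of the fitted slope estimates the order p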
#plotting
fig = plt.figure(figsize=(16,5))
methods = [y_EulerAll, y_HeunAll, y_RK4All]
titles = ['Euler Forward', 'Heun', 'Runge-Kutta 4th', 'Deviation of num. solutions.']
colors = ['g','r','b','m','c']
markers = ['-o','-^','-s','-p','-*']
for i in range(3):
plt.subplot(1,4,i+1)
y = methods[i]
plt.plot(t_smooth, y_Analytical, label = 'Analytical', c='k')
for j in range(5):
plt.plot(tRange(N[j]),methods[i][j], markers[j],c=colors[j],label = 'N = '+str(N[j]-1),alpha=0.9)
plt.grid(alpha=0.2)
plt.title(titles[i],fontsize = 15)
plt.ylabel(r'$y(t)$')
plt.xlabel(r'$t$')
plt.ylim(-0.6,1.2)
plt.legend()
logErrors = np.log(np.array([EulerError, HeunError, RK4Error]))
# logEuErr = np.logspace(EulerError[0],EulerError[-1], len(EulerError))
# logHeErr = np.logspace(HeunError[0], HeunError[-1], len(HeunError))
# logRKErr = np.logspace(RK4Error[0], RK4Error[-1], len(RK4Error))
# logErrors = np.array([logEuErr, logHeErr, logRKErr])
labelsErrors = ['Euler fw.', 'Heun', 'RK-4']
colorsErrors = ['b','r','k']
plt.subplot(1,4,4)
plt.title(titles[3],fontsize = 15)
plt.ylabel(r'$log(\epsilon)$')
plt.xlabel(r'$log(N)$')
convergenceOrder = []
print('Convergence order is: ')
for i in range(3):
plt.scatter(logTestN,logErrors[i], marker = markers[i].replace('-',''), c= colorsErrors[i], label = labelsErrors[i],alpha=0.5)
popt, pcov = optimize.curve_fit(linearFit, logTestN, logErrors[i])
    print('\tFor ' + labelsErrors[i] + ' Method, it is ' + str(round(abs(popt[0])))) # magnitude of the fitted slope, rounded to the nearest integer (np.int is removed in recent NumPy)
plt.plot(logTestN, linearFit(logTestN, *popt),c= colorsErrors[i])
plt.grid(alpha=0.2)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
tutorials/asr/08_ASR_with_Subword_Tokenization.ipynb | ###Markdown
Automatic Speech Recognition with Subword Tokenization
In the [ASR with NeMo notebook](https://colab.research.google.com/github/NVIDIA/NeMo/blob/v1.0.2/tutorials/asr/01_ASR_with_NeMo.ipynb), we discuss the pipeline necessary for Automatic Speech Recognition (ASR), and then use the NeMo toolkit to construct a functioning speech recognition model.
In this notebook, we take a step further and look into subword tokenization as a useful encoding scheme for ASR models, and why they are necessary. We then construct a custom tokenizer from the dataset, and use it to construct and train an ASR model on the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). Subword Tokenization
We begin with a short intro to what exactly is subword tokenization. If you are familiar with some Natural Language Processing terminologies, then you might have heard of the term "subword" frequently.
So what is a subword in the first place? Simply put, it is either a single character or a group of characters. When combined according to a tokenization-detokenization algorithm, it generates a set of characters, words, or entire sentences.
Many subword tokenization-detokenization algorithms exist, which can be built using large corpora of text data to tokenize and detokenize the data to and from subwords effectively. Some of the most commonly used subword tokenization methods are [Byte Pair Encoding](https://arxiv.org/abs/1508.07909), [Word Piece Encoding](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and [Sentence Piece Encoding](https://www.aclweb.org/anthology/D18-2012/), to name just a few.
------
Here, we will show a short demo on why subword tokenization is necessary for Automatic Speech Recognition under certain situations and its benefits to the model in terms of efficiency and accuracy. We will implement the general steps that a subword tokenization algorithm might perform. Note - this is just a simplified demonstration of the underlying technique.
###Code
TEXT_CORPUS = [
"hello world",
"today is a good day",
]
###Output
_____no_output_____
###Markdown
We first start with a simple character tokenizer
###Code
def char_tokenize(text):
tokens = []
for char in text:
tokens.append(ord(char))
return tokens
def char_detokenize(tokens):
tokens = [chr(t) for t in tokens]
text = "".join(tokens)
return text
###Output
_____no_output_____
###Markdown
Now let's make sure the character tokenizer is doing its job correctly!
###Code
char_tokens = char_tokenize(TEXT_CORPUS[0])
print("Tokenized tokens :", char_tokens)
text = char_detokenize(char_tokens)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
-----
Great! The character tokenizer did its job correctly - each character is separated as an individual token, and they can be reconstructed into precisely the original text!
Now let's create a simple dictionary-based tokenizer - it will have a select set of subwords that it will use to map tokens back and forth. Note - to simplify the technique's demonstration, we will use a vocabulary with entire words. However, note that this is uncommon in practice unless the vocabulary sizes are huge when built on natural text.
###Code
def dict_tokenize(text, vocabulary):
tokens = []
# first do full word searches
split_text = text.split()
for split in split_text:
if split in vocabulary:
tokens.append(vocabulary[split])
else:
chars = list(split)
t_chars = [vocabulary[c] for c in chars]
tokens.extend(t_chars)
tokens.append(vocabulary[" "])
# remove extra space token
tokens.pop(-1)
return tokens
def dict_detokenize(tokens, vocabulary):
text = ""
reverse_vocab = {v: k for k, v in vocabulary.items()}
for token in tokens:
if token in reverse_vocab:
text = text + reverse_vocab[token]
else:
text = text + "".join(token)
return text
###Output
_____no_output_____
###Markdown
First, we need to build a vocabulary for this tokenizer. It will contain all the lower case English characters, space, and a few whole words for simplicity.
###Code
vocabulary = {chr(i + ord("a")) : (i + 1) for i in range(26)}
# add whole words and special tokens
vocabulary[" "] = 0
vocabulary["hello"] = len(vocabulary) + 1
vocabulary["today"] = len(vocabulary) + 1
vocabulary["good"] = len(vocabulary) + 1
print(vocabulary)
dict_tokens = dict_tokenize(TEXT_CORPUS[0], vocabulary)
print("Tokenized tokens :", dict_tokens)
text = dict_detokenize(dict_tokens, vocabulary)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
------
Great! Our dictionary tokenizer works well and tokenizes-detokenizes the data correctly.
You might be wondering - why did we have to go through all this trouble to tokenize and detokenize data if we get back the same thing?
For ASR - the hidden benefit lies in the length of the tokenized representation!
###Code
print("Character tokenization length -", len(char_tokens))
print("Dict tokenization length -", len(dict_tokens))
###Output
_____no_output_____
###Markdown
By having the whole word "hello" in our tokenizer's dictionary, we could reduce the length of the tokenized data by four tokens and still represent the same information!
Actual subword algorithms like the ones discussed above go several steps further - they partition whole words based on occurrence in text and build tokens for them too! So instead of wasting 5 tokens for `["h", "e", "l", "l", "o"]`, we can represent it as `["hel", "lo"]` and then merge the two tokens together to get back `hello`, using just 2 tokens! The necessity of subword tokenization
It has been found via extensive research in the domain of Neural Machine Translation and Language Modelling (and its variants), that subword tokenization not only reduces the length of the tokenized representation (thereby making sentences shorter and more manageable for models to learn), but also boosts the accuracy of prediction of correct tokens (refer to the earlier cited papers).
You might remember that earlier; we mentioned subword tokenization as a necessity rather than just a nice-to-have component for ASR. In the previous tutorial, we used the [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) loss function to train the model, but this loss function has a few limitations-
- **Generated tokens are conditionally independent of each other**. In other words - the probability of character "l" being predicted after "hel" is conditionally independent of the previous token - so any other token can also be predicted unless the model has future information!
- **The length of the generated (target) sequence must be shorter than that of the source sequence.**
------
It turns out - subword tokenization helps alleviate both of these issues!
- Sophisticated subword tokenization algorithms build their vocabularies based on large text corpora. To accurately tokenize such large volumes of text with minimal vocabulary size, the subwords that are learned inherently model the interdependency between tokens of that language to some degree.
Looking at the previous example, the token `hel` is a single token that represents the relationship `h` => `e` => `l`. When the model predicts the single token `hel`, it implicitly predicts this relationship - even though the subsequent token can be either `l` (for `hell`) or `lo` (for `hello`) and is predicted independently of the previous token!
- By reducing the target sentence length by subword tokenization (target sentence here being the characters/subwords transcribed from the audio signal), we entirely sidestep the sequence length limitation of CTC loss!
This means we can perform a larger number of pooling steps in our acoustic models, thereby improving execution speed while simultaneously reducing memory requirements. Building a custom subword tokenizer
After all that talk about subword tokenization, let's finally build a custom tokenizer for our ASR model! While the `AN4` dataset is simple enough to be trained using character-based models, its small size is also perfect for a demonstration on a notebook. Preparing the dataset (AN4)
The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.
Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)
###Code
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# The directory should exist.
data_dir = "."
import glob
import os
import subprocess
import tarfile
import wget
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
###Output
_____no_output_____
###Markdown
You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.
Creating Data Manifests
The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.
Here's an example of what one line in a NeMo-compatible manifest might look like:
```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```
We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:
```
...
P I T T S B U R G H (cen5-fash-b)
TWO SIX EIGHT FOUR FOUR ONE EIGHT (cen7-fash-b)
...
```
###Code
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
###Output
_____no_output_____
###Markdown
Let's look at a few files from this manifest -
###Code
!head -n 5 {data_dir}/an4/train_manifest.json
###Output
_____no_output_____
###Markdown
Build a custom tokenizer
Next, we will use a NeMo script to easily build a tokenizer for the above dataset. The script takes a few arguments, which will be explained in detail.
First, download the tokenizer creation script from the nemo repository.
###Code
if not os.path.exists("scripts/tokenizers/process_asr_text_tokenizer.py"):
!mkdir scripts
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
The script above takes a few important arguments -
- either `--manifest` or `--data_file`: If your text data lies inside of an ASR manifest file, then use the `--manifest` path. If instead the text data is inside a file with separate lines corresponding to different text lines, then use `--data_file`. In either case, you can add commas to concatenate different manifests or different data files.
- `--data_root`: The output directory (whose subdirectories will be created if not present) where the tokenizers will be placed.
- `--vocab_size`: The size of the tokenizer vocabulary. Larger vocabularies can accommodate almost entire words, but the decoder size of any model will grow proportionally.
- `--tokenizer`: Can be either `spe` or `wpe` . `spe` refers to the Google `sentencepiece` library tokenizer. `wpe` refers to the HuggingFace BERT Word Piece tokenizer. Please refer to the papers above for the relevant technique in order to select an appropriate tokenizer.
- `--no_lower_case`: When this flag is passed, it will force the tokenizer to create separate tokens for upper and lower case characters. By default, the script will turn all the text to lower case before tokenization (and if upper case characters are passed during training/inference, the tokenizer will emit a token equivalent to Out-Of-Vocabulary). Used primarily for the English language.
- `--spe_type`: The `sentencepiece` library has a few implementations of the tokenization technique, and `spe_type` refers to these implementations. Currently supported types are `unigram`, `bpe`, `char`, `word`. Defaults to `bpe`.
- `--spe_character_coverage`: The `sentencepiece` library considers how much of the original vocabulary it should cover in its "base set" of tokens (akin to the lower and upper case characters of the English language). For almost all languages with small base token sets `(<1000 tokens)`, this should be kept at its default of 1.0. For languages with larger vocabularies (say Japanese, Mandarin, Korean etc), the suggested value is 0.9995.
- `--spe_sample_size`: If the dataset is too large, consider using a sampled dataset indicated by a positive integer. By default, any negative value (default = -1) will use the entire dataset.
- `--spe_train_extremely_large_corpus`: When training a sentencepiece tokenizer on very large amounts of text, sometimes the tokenizer will run out of memory or won't be able to process so much data in RAM. At some point you might receive the following error - "Input corpus too large, try with train_extremely_large_corpus=true". If your machine has large amounts of RAM, it might still be possible to build the tokenizer using the above flag. Will silently fail if it runs out of RAM.
- `--log`: Whether the script should display log messages
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=32 \
--tokenizer="spe" \
--no_lower_case \
--spe_type="unigram" \
--log
###Output
_____no_output_____
###Markdown
-----
That's it! Our tokenizer is now built and stored inside the `data_root` directory that we provided to the script.
First we start by inspecting the tokenizer vocabulary itself. To keep it manageable, we will print just the first 10 tokens of the vocabulary:
###Code
!head -n 10 {data_dir}/tokenizers/an4/tokenizer_spe_unigram_v32/vocab.txt
###Output
_____no_output_____
###Markdown
Training an ASR Model with subword tokenization
Now that our tokenizer is built, let's begin constructing an ASR model that will use this tokenizer for its dataset pre-processing and post-processing steps.
We will use a Citrinet model to demonstrate the usage of subword tokenization models for training and inference. Citrinet is a [QuartzNet-like architecture](https://arxiv.org/abs/1910.10261), but it uses subword-tokenization along with 8x subsampling and [Squeeze-and-Excitation](https://arxiv.org/abs/1709.01507) to achieve strong accuracy in transcriptions while still using non-autoregressive decoding for efficient inference.
We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/v1.0.2/).
NeMo lets us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Training from scratch
To train from scratch, you need to prepare your training data in the right format and specify your model's architecture. Specifying Our Model with a YAML Config File
We'll build a *Citrinet* model for this tutorial with a *greedy CTC decoder*, using the configuration found in `./configs/citrinet_bpe.yaml`.
If we open up this config file, we find a model section which describes the architecture of our model. The model contains an entry labeled `encoder`, with a field called `jasper` that holds a list with multiple entries. Each of the members in this list specifies one block in our model, and looks something like this:
```
- filters: 192
repeat: 5
kernel: [11]
stride: [1]
dilation: [1]
dropout: 0.0
residual: false
separable: true
se: true
se_context_size: -1
```
The first member of the list corresponds to the first block in the QuartzNet/Citrinet architecture diagram.
Some entries at the top of the file specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.
Using a YAML config such as this helps get a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.
###Code
from omegaconf import OmegaConf, open_dict
params = OmegaConf.load("./configs/config_bpe.yaml")
###Output
_____no_output_____
###Markdown
Let us make the network smaller since `AN4` is a particularly small dataset and does not need the capacity of the general config.
###Code
print(OmegaConf.to_yaml(params))
###Output
_____no_output_____
###Markdown
Specifying the tokenizer to the model
Now that we have a model config, we are almost ready to train it! We just have to tell it where the tokenizer directory is, and it will do the rest for us!
We have to provide just two pieces of information via the config:
- `tokenizer.dir`: The directory where the tokenizer files are stored
- `tokenizer.type`: Can be `bpe` (for `sentencepiece`-based tokenizers) or `wpe` (for HuggingFace-based BERT Word Piece tokenizers). This tells the model what type of tokenizer is being supplied, so it can parse the tokenizer directory and construct the actual tokenizer.
**Note**: We only have to provide the **directory** where the tokenizer file exists along with its vocabulary and any other essential components. We pass the directory instead of an explicit vocabulary path, since not all libraries construct their tokenizer in the same manner, so the model will figure out how it should prepare the tokenizer.
###Code
params.model.tokenizer.dir = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/" # note this is a directory, not a path to a vocabulary file
params.model.tokenizer.type = "bpe"
###Output
_____no_output_____
###Markdown
Training with PyTorch Lightning
NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected.
However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule and we recommend you use PyTorch Lightning for training and fine-tuning, as it makes using mixed precision and distributed training very easy. So to start, let's create a Trainer instance for training on a GPU for 50 epochs.
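For example, with the PyTorch Lightning 1.x style API used in this tutorial, mixed precision and multi-GPU training are mostly a matter of `Trainer` flags. A hypothetical variant (not used in this tutorial, flag names assume a Lightning 1.x release) might look like:
```
import pytorch_lightning as pl
# sketch only: 2 GPUs with DDP and fp16 mixed precision (PyTorch Lightning 1.x style flags)
trainer = pl.Trainer(gpus=2, accelerator='ddp', precision=16, max_epochs=50)
```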
###Code
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
###Output
_____no_output_____
###Markdown
Next, we instantiate an ASR model based on our ``citrinet_bpe.yaml`` file from the previous section.
Note that this is a stage during which we also tell the model where our training and validation manifests are.
###Code
# Update paths to dataset
params.model.train_ds.manifest_filepath = train_manifest
params.model.validation_ds.manifest_filepath = test_manifest
# remove spec augment for this dataset
params.model.spec_augment.rect_masks = 0
###Output
_____no_output_____
###Markdown
Note the subtle difference in the model that we instantiate - `EncDecCTCModelBPE` instead of `EncDecCTCModel`.
`EncDecCTCModelBPE` is nearly identical to `EncDecCTCModel` (it is in fact a subclass!) and simply adds support for subword tokenization.
###Code
first_asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=params.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Training: Monitoring Progress
We can now start TensorBoard to monitor how training goes. Recall that WER stands for Word Error Rate, so the lower it is, the better.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
With that, we can start training with just one line!
###Code
# Start training!!!
trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Save the model easily along with the tokenizer using `save_to`.
Later, we can use `restore_from` to restore the model; this will also reinitialize the tokenizer!
###Code
first_asr_model.save_to("first_model.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____
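###Markdown
As a quick sketch of the restore side of that round trip (we will do this for real in the fine-tuning section below), the `.nemo` archive we just saved can be loaded back, tokenizer included; the variable name here is just illustrative:
###Code
# sketch: rebuild the model (and its tokenizer) from the .nemo archive saved above
first_asr_model_restored = nemo_asr.models.EncDecCTCModelBPE.restore_from("first_model.nemo")
###Output
_____no_output_____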
###Markdown
There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to()`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from()`. We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:
###Code
print(params.model.optim)
###Output
_____no_output_____
###Markdown
After training and hyper parameter tuning
Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `.setup_optimization()` with the new optimization parameters.
###Code
import copy
new_opt = copy.deepcopy(params.model.optim)
new_opt.lr = 0.1
first_asr_model.setup_optimization(optim_config=new_opt);
# And then you can invoke trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Inference
Let's have a quick look at how one could run inference with NeMo's ASR model.
First, ``EncDecCTCModelBPE`` and its subclasses contain a handy ``transcribe`` method which can be used to simply obtain audio files' transcriptions. It also has a `batch_size` argument to improve performance.
###Code
print(first_asr_model.transcribe(paths2audio_files=[data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
batch_size=4))
###Output
_____no_output_____
###Markdown
Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute Word Error Rate (WER) metric between predictions and references.
###Code
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16
# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()
first_asr_model.eval()
# We remove some preprocessing artifacts which benefit training
first_asr_model.preprocessor.featurizer.pad_to = 0
first_asr_model.preprocessor.featurizer.dither = 0.0
# We will be computing the Word Error Rate (WER) metric between our predictions (hypotheses) and the reference transcripts.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
test_batch = [x.cuda() for x in test_batch]
targets = test_batch[2]
targets_lengths = test_batch[3]
log_probs, encoded_len, greedy_predictions = first_asr_model(
input_signal=test_batch[0], input_signal_length=test_batch[1]
)
# Notice the model has a helper object to compute WER
first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
_, wer_num, wer_denom = first_asr_model._wer.compute()
wer_nums.append(wer_num.detach().cpu().numpy())
wer_denoms.append(wer_denom.detach().cpu().numpy())
# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
###Output
_____no_output_____
###Markdown
This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Utilizing the underlying tokenizer
Since the model has an underlying tokenizer, it would be nice to use it externally as well - say for getting the subwords of the transcript or to tokenize a dataset using the same tokenizer as the ASR model.
###Code
tokenizer = first_asr_model.tokenizer
tokenizer
###Output
_____no_output_____
###Markdown
You can get the tokenizer's vocabulary using the `tokenizer.tokenizer.get_vocab()` method.
ASR tokenizers will map the subword to an integer index in the vocabulary for convenience.
###Code
vocab = tokenizer.tokenizer.get_vocab()
vocab
###Output
_____no_output_____
###Markdown
You can also tokenize and detokenize some text using this tokenizer, with the same API across all of NeMo.
###Code
tokens = tokenizer.text_to_tokens("hello world")
tokens
token_ids = tokenizer.text_to_ids("hello world")
token_ids
subwords = tokenizer.ids_to_tokens(token_ids)
subwords
text = tokenizer.ids_to_text(token_ids)
text
###Output
_____no_output_____
###Markdown
Model Improvements
You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.
Data Augmentation
There exist several ASR data augmentation methods that can increase the size of our training set.
For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)
Our toy model disables spectrogram augmentation, because it is not significantly beneficial for the short demo.
###Code
print(OmegaConf.to_yaml(first_asr_model._cfg['spec_augment']))
###Output
_____no_output_____
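###Markdown
To bring augmentation back for a longer training run, you would set these fields before constructing the model - a sketch with a purely illustrative (not tuned) value:
###Code
# sketch only: re-enable Cutout-style rectangular masks in the config
# (5 is an arbitrary illustrative count, not a tuned hyperparameter)
params.model.spec_augment.rect_masks = 5
###Output
_____no_output_____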
###Markdown
If you want to enable SpecAugment in your model, make sure your .yaml config file contains a 'model/spec_augment' section which looks like the one above. Transfer learning
Transfer learning is an important machine learning technique that uses a model’s knowledge of one task to perform better on another. Fine-tuning is one of the techniques to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.
Transfer learning with NeMo is simple. Let's demonstrate how we could fine-tune the model we trained earlier on AN4 data. (NOTE: this is a toy example). And, while we are at it, we will change the model's vocabulary to demonstrate how it's done. -----
First, let's create another tokenizer - perhaps using a larger vocabulary size than the small tokenizer we created earlier. We also swap out the `sentencepiece` tokenizer for the `BERT Word Piece` tokenizer.
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=64 \
--tokenizer="wpe" \
--no_lower_case \
--log
###Output
_____no_output_____
###Markdown
Now let's load the previously trained model so that we can fine-tune it:
###Code
restored_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("./first_model.nemo")
###Output
_____no_output_____
###Markdown
Now let's update the vocabulary in this model
###Code
# Check what kind of vocabulary/alphabet the model has right now
print(restored_model.decoder.vocabulary)
# Let's change the tokenizer vocabulary by passing the path to the new directory,
# and also change the type
restored_model.change_vocabulary(
new_tokenizer_dir=data_dir + "/tokenizers/an4/tokenizer_wpe_v64/",
new_tokenizer_type="wpe"
)
###Output
_____no_output_____
###Markdown
After this, our decoder has completely changed, but our encoder (where most of the weights are) remained intact. Let's fine-tune this model for 20 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After training and hyper parameter tuning" section).
**Note**: For this demonstration, we will also freeze the encoder to speed up finetuning (since both tokenizers are built on the same train set), but in general it should not be done for proper training on a new language (or on a different corpus than the original train corpus).
###Code
# Use the smaller learning rate we set before
restored_model.setup_optimization(optim_config=new_opt)
# Point to the data we'll use for fine-tuning as the training set
restored_model.setup_training_data(train_data_config=params['model']['train_ds'])
# Point to the new validation data for fine-tuning
restored_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
# Freeze the encoder layers (should not be done for finetuning, only done for demo)
restored_model.encoder.freeze()
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=20)
trainer.fit(restored_model)
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Subword Tokenization
In the [ASR with NeMo notebook](https://colab.research.google.com/github/NVIDIA/NeMo/blob/stable/tutorials/asr/01_ASR_with_NeMo.ipynb), we discuss the pipeline necessary for Automatic Speech Recognition (ASR), and then use the NeMo toolkit to construct a functioning speech recognition model.
In this notebook, we take a step further and look into subword tokenization as a useful encoding scheme for ASR models, and why they are necessary. We then construct a custom tokenizer from the dataset, and use it to construct and train an ASR model on the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). Subword Tokenization
We begin with a short intro to what exactly is subword tokenization. If you are familiar with some Natural Language Processing terminologies, then you might have heard of the term "subword" frequently.
So what is a subword in the first place? Simply put, it is either a single character or a group of characters. When combined according to a tokenization-detokenization algorithm, it generates a set of characters, words, or entire sentences.
Many subword tokenization-detokenization algorithms exist, which can be built using large corpora of text data to tokenize and detokenize the data to and from subwords effectively. Some of the most commonly used subword tokenization methods are [Byte Pair Encoding](https://arxiv.org/abs/1508.07909), [Word Piece Encoding](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and [Sentence Piece Encoding](https://www.aclweb.org/anthology/D18-2012/), to name just a few.
------
Here, we will show a short demo on why subword tokenization is necessary for Automatic Speech Recognition under certain situations and its benefits to the model in terms of efficiency and accuracy. We will implement the general steps that a subword tokenization algorithm might perform. Note - this is just a simplified demonstration of the underlying technique.
###Code
TEXT_CORPUS = [
"hello world",
"today is a good day",
]
###Output
_____no_output_____
###Markdown
We first start with a simple character tokenizer
###Code
def char_tokenize(text):
tokens = []
for char in text:
tokens.append(ord(char))
return tokens
def char_detokenize(tokens):
tokens = [chr(t) for t in tokens]
text = "".join(tokens)
return text
###Output
_____no_output_____
###Markdown
Now make sure that character tokenizer is doing its job correctly !
###Code
char_tokens = char_tokenize(TEXT_CORPUS[0])
print("Tokenized tokens :", char_tokens)
text = char_detokenize(char_tokens)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
-----
Great! The character tokenizer did its job correctly - each character is separated as an individual token, and they can be reconstructed into precisely the original text!
Now let's create a simple dictionary-based tokenizer - it will have a select set of subwords that it will use to map tokens back and forth. Note - to simplify the technique's demonstration; we will use a vocabulary with entire words. However, note that this is an uncommon occurrence unless the vocabulary sizes are huge when built on natural text.
###Code
def dict_tokenize(text, vocabulary):
tokens = []
# first do full word searches
split_text = text.split()
for split in split_text:
if split in vocabulary:
tokens.append(vocabulary[split])
else:
chars = list(split)
t_chars = [vocabulary[c] for c in chars]
tokens.extend(t_chars)
tokens.append(vocabulary[" "])
# remove extra space token
tokens.pop(-1)
return tokens
def dict_detokenize(tokens, vocabulary):
text = ""
reverse_vocab = {v: k for k, v in vocabulary.items()}
for token in tokens:
if token in reverse_vocab:
text = text + reverse_vocab[token]
else:
text = text + "".join(token)
return text
###Output
_____no_output_____
###Markdown
First, we need to build a vocabulary for this tokenizer. It will contain all the lower case English characters, space, and a few whole words for simplicity.
###Code
vocabulary = {chr(i + ord("a")) : (i + 1) for i in range(26)}
# add whole words and special tokens
vocabulary[" "] = 0
vocabulary["hello"] = len(vocabulary) + 1
vocabulary["today"] = len(vocabulary) + 1
vocabulary["good"] = len(vocabulary) + 1
print(vocabulary)
dict_tokens = dict_tokenize(TEXT_CORPUS[0], vocabulary)
print("Tokenized tokens :", dict_tokens)
text = dict_detokenize(dict_tokens, vocabulary)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
------
Great! Our dictionary tokenizer works well and tokenizes-detokenizes the data correctly.
You might be wondering - why did we have to go through all this trouble to tokenize and detokenize data if we get back the same thing?
For ASR - the hidden benefit lies in the length of the tokenized representation!
###Code
print("Character tokenization length -", len(char_tokens))
print("Dict tokenization length -", len(dict_tokens))
###Output
_____no_output_____
###Markdown
By having the whole word "hello" in our tokenizer's dictionary, we could reduce the length of the tokenized data by four tokens and still represent the same information!
Actual subword algorithms like the ones discussed above go several steps further - they partition whole words based on occurrence in text and build tokens for them too! So instead of wasting 5 tokens for `["h", "e", "l", "l", "o"]`, we can represent it as `["hel", "lo"]` and then merge the `` tokens together to get back `hello` by using just 2 tokens ! The necessity of subword tokenization
It has been found via extensive research in the domain of Neural Machine Translation and Language Modelling (and its variants), that subword tokenization not only reduces the length of the tokenized representation (thereby making sentences shorter and more manageable for models to learn), but also boosts the accuracy of prediction of correct tokens (refer to the earlier cited papers).
You might remember that earlier; we mentioned subword tokenization as a necessity rather than just a nice-to-have component for ASR. In the previous tutorial, we used the [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) loss function to train the model, but this loss function has a few limitations-
- **Generated tokens are conditionally independent of each other**. In other words - the probability of character "l" being predicted after "hel" is conditionally independent of the previous token - so any other token can also be predicted unless the model has future information!
- **The length of the generated (target) sequence must be shorter than that of the source sequence.**
------
It turns out - subword tokenization helps alleviate both of these issues!
- Sophisticated subword tokenization algorithms build their vocabularies based on large text corpora. To accurately tokenize such large volumes of text with minimal vocabulary size, the subwords that are learned inherently model the interdependency between tokens of that language to some degree.
Looking at the previous example, the token `hel` is a single token that represents the relationship `h` => `e` => `l`. When the model predicts the singe token `hel`, it implicitly predicts this relationship - even though the subsequent token can be either `l` (for `hell`) or `lo` (for `hello`) and is predicted independently of the previous token!
- By reducing the target sentence length by subword tokenization (target sentence here being the characters/subwords transcribed from the audio signal), we entirely sidestep the sequence length limitation of CTC loss!
This means we can perform a larger number of pooling steps in our acoustic models, thereby improving execution speed while simultaneously reducing memory requirements. Building a custom subword tokenizer
After all that talk about subword tokenization, let's finally build a custom tokenizer for our ASR model! While the `AN4` dataset is simple enough to be trained using character-based models, its small size is also perfect for a demonstration on a notebook. Preparing the dataset (AN4)
The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.
Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)
###Code
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# The directory should exist.
data_dir = "."
import glob
import os
import subprocess
import tarfile
import wget
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
###Output
_____no_output_____
###Markdown
You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.
Creating Data Manifests
The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.
Here's an example of what one line in a NeMo-compatible manifest might look like:
```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```
We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:
```
...
P I T T S B U R G H (cen5-fash-b)
TWO SIX EIGHT FOUR FOUR ONE EIGHT (cen7-fash-b)
...
```
###Code
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
###Output
_____no_output_____
###Markdown
Let's look at a few files from this manifest -
###Code
!head -n 5 {data_dir}/an4/train_manifest.json
###Output
_____no_output_____
###Markdown
Build a custom tokenizer
Next, we will use a NeMo script to easily build a tokenizer for the above dataset. The script takes a few arguments, which will be explained in detail.
First, download the tokenizer creation script from the nemo repository.
###Code
if not os.path.exists("scripts/tokenizers/process_asr_text_tokenizer.py"):
!mkdir scripts
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
The script above takes a few important arguments -
- either `--manifest` or `--data_file`: If your text data lies inside of an ASR manifest file, then use the `--manifest` path. If instead the text data is inside a file with separate lines corresponding to different text lines, then use `--data_file`. In either case, you can add commas to concatenate different manifests or different data files.
- `--data_root`: The output directory (whose subdirectories will be created if not present) where the tokenizers will be placed.
- `--vocab_size`: The size of the tokenizer vocabulary. Larger vocabularies can accommodate almost entire words, but the decoder size of any model will grow proportionally.
- `--tokenizer`: Can be either `spe` or `wpe` . `spe` refers to the Google `sentencepiece` library tokenizer. `wpe` refers to the HuggingFace BERT Word Piece tokenizer. Please refer to the papers above for the relevant technique in order to select an appropriate tokenizer.
- `--no_lower_case`: When this flag is passed, it will force the tokenizer to create separate tokens for upper and lower case characters. By default, the script will turn all the text to lower case before tokenization (and if upper case characters are passed during training/inference, the tokenizer will emit a token equivalent to Out-Of-Vocabulary). Used primarily for the English language.
- `--spe_type`: The `sentencepiece` library has a few implementations of the tokenization technique, and `spe_type` refers to these implementations. Currently supported types are `unigram`, `bpe`, `char`, `word`. Defaults to `bpe`.
- `--spe_character_coverage`: The `sentencepiece` library considers how much of the original vocabulary it should cover in its "base set" of tokens (akin to the lower and upper case characters of the English language). For almost all languages with small base token sets `(<1000 tokens)`, this should be kept at its default of 1.0. For languages with larger vocabularies (say Japanese, Mandarin, Korean etc), the suggested value is 0.9995.
- `--spe_sample_size`: If the dataset is too large, consider using a sampled dataset indicated by a positive integer. By default, any negative value (default = -1) will use the entire dataset.
- `--spe_train_extremely_large_corpus`: When training a sentencepiece tokenizer on very large amounts of text, sometimes the tokenizer will run out of memory or wont be able to process so much data on RAM. At some point you might receive the following error - "Input corpus too large, try with train_extremely_large_corpus=true". If your machine has large amounts of RAM, it might still be possible to build the tokenizer using the above flag. Will silently fail if it runs out of RAM.
- `--log`: Whether the script should display log messages
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=32 \
--tokenizer="spe" \
--no_lower_case \
--spe_type="unigram" \
--log
###Output
_____no_output_____
###Markdown
-----
That's it! Our tokenizer is now built and stored inside the `data_root` directory that we provided to the script.
First we start by inspecting the tokenizer vocabulary itself. To keep it manageable, we will print just the first 10 tokens of the vocabulary:
###Code
!head -n 10 {data_dir}/tokenizers/an4/tokenizer_spe_unigram_v32/vocab.txt
###Output
_____no_output_____
###Markdown
Training an ASR Model with subword tokenization
Now that our tokenizer is built, let's begin constructing an ASR model that will use this tokenizer for its dataset pre-processing and post-processing steps.
We will use a Citrinet model to demonstrate the usage of subword tokenization models for training and inference. Citrinet is a [QuartzNet-like architecture](https://arxiv.org/abs/1910.10261), but it uses subword-tokenization along with 8x subsampling and [Squeeze-and-Excitation](https://arxiv.org/abs/1709.01507) to achieve strong accuracy in transcriptions while still using non-autoregressive decoding for efficient inference.
We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/v1.0.2/).
NeMo let us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Training from scratch
To train from scratch, you need to prepare your training data in the right format and specify your models architecture. Specifying Our Model with a YAML Config File
We'll build a *Citrinet* model for this tutorial and use *greedy CTC decoder*, using the configuration found in `./configs/citrinet_bpe.yaml`.
If we open up this config file, we find model section which describes architecture of our model. A model contains an entry labeled `encoder`, with a field called `jasper` that contains a list with multiple entries. Each of the members in this list specifies one block in our model, and looks something like this:
```
- filters: 192
repeat: 5
kernel: [11]
stride: [1]
dilation: [1]
dropout: 0.0
residual: false
separable: true
se: true
se_context_size: -1
```
The first member of the list corresponds to the first block in the QuartzNet/Citrinet architecture diagram.
Some entries at the top of the file specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.
Using a YAML config such as this helps get a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.
###Code
from omegaconf import OmegaConf, open_dict
params = OmegaConf.load("./configs/config_bpe.yaml")
###Output
_____no_output_____
###Markdown
Let us make the network smaller since `AN4` is a particularly small dataset and does not need the capacity of the general config.
###Code
print(OmegaConf.to_yaml(params))
###Output
_____no_output_____
###Markdown
Specifying the tokenizer to the model
Now that we have a model config, we are almost ready to train it ! We just have to inform it where the tokenizer directory exists and it will do the rest for us !
We have to provide just two pieces of information via the config:
- `tokenizer.dir`: The directory where the tokenizer files are stored
- `tokenizer.type`: Can be `bpe` (for `sentencepiece` based tokenizers) or `wpe` (for HuggingFace based BERT Word Piece Tokenizers. Represents what type of tokenizer is being supplied and parse its directory to construct the actual tokenizer.
**Note**: We only have to provide the **directory** where the tokenizer file exists along with its vocabulary and any other essential components. We pass the directory instead of an explicit vocabulary path, since not all libraries construct their tokenizer in the same manner, so the model will figure out how it should prepare the tokenizer.
###Code
params.model.tokenizer.dir = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/" # note this is a directory, not a path to a vocabulary file
params.model.tokenizer.type = "bpe"
###Output
_____no_output_____
###Markdown
Training with PyTorch Lightning
NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected.
However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule and we recommend you use PytorchLightning for training and fine-tuning as it makes using mixed precision and distributed training very easy. So to start, let's create a Trainer instance for training on a GPU for 50 epochs
###Code
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
###Output
_____no_output_____
###Markdown
Next, we instantiate an ASR model based on our ``citrinet_bpe.yaml`` file from the previous section.
Note that this is a stage during which we also tell the model where our training and validation manifests are.
###Code
# Update paths to dataset
params.model.train_ds.manifest_filepath = train_manifest
params.model.validation_ds.manifest_filepath = test_manifest
# remove spec augment for this dataset
params.model.spec_augment.rect_masks = 0
###Output
_____no_output_____
###Markdown
Note the subtle difference in the model that we instantiate - `EncDecCTCModelBPE` instead of `EncDecCTCModel`.
`EncDecCTCModelBPE` is nearly identical to `EncDecCTCModel` (it is in fact a subclass!); it simply adds support for subword tokenization.
###Code
first_asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=params.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Training: Monitoring Progress
We can now start TensorBoard to monitor how training goes. Recall that WER stands for Word Error Rate, so the lower it is, the better.
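As a quick refresher (this is the standard WER definition, nothing NeMo-specific): WER = (substitutions + deletions + insertions) / number of words in the reference. A tiny worked example:
```
# reference:  "the cat sat"        (3 words)
# hypothesis: "the cat sat down"   (1 inserted word, nothing substituted or deleted)
substitutions, deletions, insertions, ref_words = 0, 0, 1, 3
wer = (substitutions + deletions + insertions) / ref_words
print(f"WER = {wer:.2f}")  # 0.33 -> 33% word error rate
```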
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
With that, we can start training with just one line!
###Code
# Start training!!!
trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Save the model easily along with the tokenizer using `save_to`.
Later, we can use `restore_from` to restore the model; this will also reinitialize the tokenizer.
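For reference, restoring the checkpoint written by the next cell is a one-liner (the same class method is used later in this notebook):
```
# Hedged sketch: rebuild the model (and its tokenizer) from the .nemo file.
restored = nemo_asr.models.EncDecCTCModelBPE.restore_from("first_model.nemo")
```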
###Code
first_asr_model.save_to("first_model.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____
###Markdown
There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to()`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from()`. We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:
###Code
print(params.model.optim)
###Output
_____no_output_____
###Markdown
After training and hyper parameter tuning
Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `.setup_optimization()` with the new optimization parameters.
###Code
import copy
new_opt = copy.deepcopy(params.model.optim)
new_opt.lr = 0.1
first_asr_model.setup_optimization(optim_config=new_opt);
# And then you can invoke trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Inference
Let's have a quick look at how one could run inference with NeMo's ASR model.
First, ``EncDecCTCModelBPE`` and its subclasses contain a handy ``transcribe`` method which can be used to simply obtain audio files' transcriptions. It also has a ``batch_size`` argument to improve performance.
###Code
print(first_asr_model.transcribe(paths2audio_files=[data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
batch_size=4))
###Output
_____no_output_____
###Markdown
Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute Word Error Rate (WER) metric between predictions and references.
###Code
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16
# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()
first_asr_model.eval()
# We remove some preprocessing artifacts which benefit training
first_asr_model.preprocessor.featurizer.pad_to = 0
first_asr_model.preprocessor.featurizer.dither = 0.0
# We will be computing Word Error Rate (WER) metric between our hypothesis and predictions.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
test_batch = [x.cuda() for x in test_batch]
targets = test_batch[2]
targets_lengths = test_batch[3]
log_probs, encoded_len, greedy_predictions = first_asr_model(
input_signal=test_batch[0], input_signal_length=test_batch[1]
)
# Notice the model has a helper object to compute WER
first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
_, wer_num, wer_denom = first_asr_model._wer.compute()
wer_nums.append(wer_num.detach().cpu().numpy())
wer_denoms.append(wer_denom.detach().cpu().numpy())
# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
###Output
_____no_output_____
###Markdown
This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Utilizing the underlying tokenizer
Since the model has an underlying tokenizer, it would be nice to use it externally as well - say for getting the subwords of the transcript or to tokenize a dataset using the same tokenizer as the ASR model.
###Code
tokenizer = first_asr_model.tokenizer
tokenizer
###Output
_____no_output_____
###Markdown
You can get the tokenizer's vocabulary using the `tokenizer.tokenizer.get_vocab()` method.
ASR tokenizers will map the subword to an integer index in the vocabulary for convenience.
###Code
vocab = tokenizer.tokenizer.get_vocab()
vocab
###Output
_____no_output_____
###Markdown
You can also tokenize and detokenize some text using this tokenizer, with the same API across all of NeMo.
###Code
tokens = tokenizer.text_to_tokens("hello world")
tokens
token_ids = tokenizer.text_to_ids("hello world")
token_ids
subwords = tokenizer.ids_to_tokens(token_ids)
subwords
text = tokenizer.ids_to_text(token_ids)
text
###Output
_____no_output_____
###Markdown
Model Improvements
You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.
Data Augmentation
There exist several ASR data augmentation methods that can increase the size of our training set.
For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)
Our toy model disables spectrogram augmentation, because it is not significantly beneficial for the short demo.
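If you want to turn it back on for your own runs, a minimal sketch (reusing the `rect_masks` field this notebook zeroed out earlier; the exact fields available depend on your config) would be:
```
# Hedged sketch: re-enable Cutout-style rectangle masking. Note that spec_augment
# is read when the model is constructed, so this must be set on the config
# *before* calling EncDecCTCModelBPE(cfg=params.model, ...).
params.model.spec_augment.rect_masks = 5
```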
###Code
print(OmegaConf.to_yaml(first_asr_model._cfg['spec_augment']))
###Output
_____no_output_____
###Markdown
If you want to enable SpecAugment in your model, make sure your .yaml config file contains a 'model/spec_augment' section that looks like the one above. Transfer learning
Transfer learning is an important machine learning technique that uses a model’s knowledge of one task to perform better on another. Fine-tuning is one of the techniques to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.
Transfer learning with NeMo is simple. Let's demonstrate how we could fine-tune the model we trained earlier on AN4 data. (NOTE: this is a toy example). And, while we are at it, we will change the model's vocabulary to demonstrate how it's done. -----
First, let's create another tokenizer - perhaps using a larger vocabulary size than the small tokenizer we created earlier. Also we swap out `sentencepiece` for `BERT Word Piece` tokenizer.
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=64 \
--tokenizer="wpe" \
--no_lower_case \
--log
###Output
_____no_output_____
###Markdown
Now let's load the previously trained model so that we can fine tune it-
###Code
restored_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("./first_model.nemo")
###Output
_____no_output_____
###Markdown
Now let's update the vocabulary in this model
###Code
# Check what kind of vocabulary/alphabet the model has right now
print(restored_model.decoder.vocabulary)
# Lets change the tokenizer vocabulary by passing the path to the new directory,
# and also change the type
restored_model.change_vocabulary(
new_tokenizer_dir=data_dir + "/tokenizers/an4/tokenizer_wpe_v64/",
new_tokenizer_type="wpe"
)
###Output
_____no_output_____
###Markdown
After this, our decoder has completely changed, but our encoder (where most of the weights are) remained intact. Let's fine-tune this model for 20 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After training and hyper parameter tuning" section).
**Note**: For this demonstration, we will also freeze the encoder to speed up finetuning (since both tokenizers are built on the same train set), but in general it should not be done for proper training on a new language (or on a different corpus than the original train corpus).
###Code
# Use the smaller learning rate we set before
restored_model.setup_optimization(optim_config=new_opt)
# Point to the data we'll use for fine-tuning as the training set
restored_model.setup_training_data(train_data_config=params['model']['train_ds'])
# Point to the new validation data for fine-tuning
restored_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
# Freeze the encoder layers (should not be done for finetuning, only done for demo)
restored_model.encoder.freeze()
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=20)
trainer.fit(restored_model)
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Subword Tokenization
In the [ASR with NeMo notebook](https://colab.research.google.com/github/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/01_ASR_with_NeMo.ipynb), we discuss the pipeline necessary for Automatic Speech Recognition (ASR), and then use the NeMo toolkit to construct a functioning speech recognition model.
In this notebook, we take a step further and look into subword tokenization as a useful encoding scheme for ASR models, and why they are necessary. We then construct a custom tokenizer from the dataset, and use it to construct and train an ASR model on the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). Subword Tokenization
We begin with a short intro to what exactly is subword tokenization. If you are familiar with some Natural Language Processing terminologies, then you might have heard of the term "subword" frequently.
So what is a subword in the first place? Simply put, it is either a single character or a group of characters. When combined according to a tokenization-detokenization algorithm, it generates a set of characters, words, or entire sentences.
Many subword tokenization-detokenization algorithms exist, which can be built using large corpora of text data to tokenize and detokenize the data to and from subwords effectively. Some of the most commonly used subword tokenization methods are [Byte Pair Encoding](https://arxiv.org/abs/1508.07909), [Word Piece Encoding](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and [Sentence Piece Encoding](https://www.aclweb.org/anthology/D18-2012/), to name just a few.
------
Here, we will show a short demo on why subword tokenization is necessary for Automatic Speech Recognition under certain situations and its benefits to the model in terms of efficiency and accuracy. We will implement the general steps that a subword tokenization algorithm might perform. Note - this is just a simplified demonstration of the underlying technique.
###Code
TEXT_CORPUS = [
"hello world",
"today is a good day",
]
###Output
_____no_output_____
###Markdown
We first start with a simple character tokenizer
###Code
def char_tokenize(text):
tokens = []
for char in text:
tokens.append(ord(char))
return tokens
def char_detokenize(tokens):
tokens = [chr(t) for t in tokens]
text = "".join(tokens)
return text
###Output
_____no_output_____
###Markdown
Now let's make sure that the character tokenizer is doing its job correctly!
###Code
char_tokens = char_tokenize(TEXT_CORPUS[0])
print("Tokenized tokens :", char_tokens)
text = char_detokenize(char_tokens)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
-----
Great! The character tokenizer did its job correctly - each character is separated as an individual token, and they can be reconstructed into precisely the original text!
Now let's create a simple dictionary-based tokenizer - it will have a select set of subwords that it will use to map tokens back and forth. Note - to simplify the demonstration of the technique, we will use a vocabulary with entire words. However, note that this is uncommon in practice unless the vocabulary sizes are huge when built on natural text.
###Code
def dict_tokenize(text, vocabulary):
tokens = []
# first do full word searches
split_text = text.split()
for split in split_text:
if split in vocabulary:
tokens.append(vocabulary[split])
else:
chars = list(split)
t_chars = [vocabulary[c] for c in chars]
tokens.extend(t_chars)
tokens.append(vocabulary[" "])
# remove extra space token
tokens.pop(-1)
return tokens
def dict_detokenize(tokens, vocabulary):
text = ""
reverse_vocab = {v: k for k, v in vocabulary.items()}
for token in tokens:
if token in reverse_vocab:
text = text + reverse_vocab[token]
else:
text = text + "".join(token)
return text
###Output
_____no_output_____
###Markdown
First, we need to build a vocabulary for this tokenizer. It will contain all the lower case English characters, space, and a few whole words for simplicity.
###Code
vocabulary = {chr(i + ord("a")) : (i + 1) for i in range(26)}
# add whole words and special tokens
vocabulary[" "] = 0
vocabulary["hello"] = len(vocabulary) + 1
vocabulary["today"] = len(vocabulary) + 1
vocabulary["good"] = len(vocabulary) + 1
print(vocabulary)
dict_tokens = dict_tokenize(TEXT_CORPUS[0], vocabulary)
print("Tokenized tokens :", dict_tokens)
text = dict_detokenize(dict_tokens, vocabulary)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
------
Great! Our dictionary tokenizer works well and tokenizes-detokenizes the data correctly.
You might be wondering - why did we have to go through all this trouble to tokenize and detokenize data if we get back the same thing?
For ASR - the hidden benefit lies in the length of the tokenized representation!
###Code
print("Character tokenization length -", len(char_tokens))
print("Dict tokenization length -", len(dict_tokens))
###Output
_____no_output_____
###Markdown
By having the whole word "hello" in our tokenizer's dictionary, we could reduce the length of the tokenized data by four tokens and still represent the same information!
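To make the lookup itself concrete, here is a tiny, hedged sketch (illustration only - real tokenizers are learned from data, and this is not the algorithm NeMo uses) of the greedy longest-match strategy a subword tokenizer can apply, using a made-up vocabulary that contains the subwords "hel" and "lo":
```
# Toy greedy longest-match subword tokenizer (illustrative vocabulary).
subword_vocab = {chr(i + ord("a")): i + 1 for i in range(26)}
subword_vocab.update({" ": 0, "hel": 100, "lo": 101})

def subword_tokenize(text, vocab, max_len=3):
    tokens = []
    i = 0
    while i < len(text):
        # try the longest candidate first, fall back to shorter pieces
        for j in range(min(len(text), i + max_len), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise KeyError(f"no subword found at position {i}")
    return tokens

print(subword_tokenize("hello world", subword_vocab))  # 8 tokens instead of 11 characters
```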
Actual subword algorithms like the ones discussed above go several steps further - they partition whole words based on their occurrence in text and build tokens for them too! So instead of wasting 5 tokens for `["h", "e", "l", "l", "o"]`, we can represent it as `["hel", "lo"]` and then merge those subword tokens together to get back `hello` using just 2 tokens! The necessity of subword tokenization
It has been found via extensive research in the domain of Neural Machine Translation and Language Modelling (and its variants), that subword tokenization not only reduces the length of the tokenized representation (thereby making sentences shorter and more manageable for models to learn), but also boosts the accuracy of prediction of correct tokens (refer to the earlier cited papers).
You might remember that, earlier, we mentioned subword tokenization as a necessity rather than just a nice-to-have component for ASR. In the previous tutorial, we used the [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) loss function to train the model, but this loss function has a few limitations-
- **Generated tokens are conditionally independent of each other**. In other words - the probability of character "l" being predicted after "hel" is conditionally independent of the previous token - so any other token can also be predicted unless the model has future information!
- **The length of the generated (target) sequence must be shorter than that of the source sequence.**
------
It turns out - subword tokenization helps alleviate both of these issues!
- Sophisticated subword tokenization algorithms build their vocabularies based on large text corpora. To accurately tokenize such large volumes of text with minimal vocabulary size, the subwords that are learned inherently model the interdependency between tokens of that language to some degree.
Looking at the previous example, the token `hel` is a single token that represents the relationship `h` => `e` => `l`. When the model predicts the single token `hel`, it implicitly predicts this relationship - even though the subsequent token can be either `l` (for `hell`) or `lo` (for `hello`) and is predicted independently of the previous token!
- By reducing the target sentence length by subword tokenization (target sentence here being the characters/subwords transcribed from the audio signal), we entirely sidestep the sequence length limitation of CTC loss!
This means we can perform a larger number of pooling steps in our acoustic models, thereby improving execution speed while simultaneously reducing memory requirements. Building a custom subword tokenizer
After all that talk about subword tokenization, let's finally build a custom tokenizer for our ASR model! While the `AN4` dataset is simple enough to be trained using character-based models, its small size is also perfect for a demonstration on a notebook. Preparing the dataset (AN4)
The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.
Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)
###Code
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# The directory should exist.
data_dir = "."
import glob
import os
import subprocess
import tarfile
import wget
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
###Output
_____no_output_____
###Markdown
You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.
Creating Data Manifests
The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.
Here's an example of what one line in a NeMo-compatible manifest might look like:
```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```
We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:
```
...
P I T T S B U R G H (cen5-fash-b)
TWO SIX EIGHT FOUR FOUR ONE EIGHT (cen7-fash-b)
...
```
###Code
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
###Output
_____no_output_____
###Markdown
Let's look at a few files from this manifest -
###Code
!head -n 5 {data_dir}/an4/train_manifest.json
###Output
_____no_output_____
###Markdown
Build a custom tokenizer
Next, we will use a NeMo script to easily build a tokenizer for the above dataset. The script takes a few arguments, which will be explained in detail.
First, download the tokenizer creation script from the nemo repository.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!mkdir scripts
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
The script above takes a few important arguments -
- either `--manifest` or `--data_file`: If your text data lies inside of an ASR manifest file, then use the `--manifest` path. If instead the text data is inside a file with separate lines corresponding to different text lines, then use `--data_file`. In either case, you can add commas to concatenate different manifests or different data files.
- `--data_root`: The output directory (whose subdirectories will be created if not present) where the tokenizers will be placed.
- `--vocab_size`: The size of the tokenizer vocabulary. Larger vocabularies can accommodate almost entire words, but the decoder size of any model will grow proportionally.
- `--tokenizer`: Can be either `spe` or `wpe` . `spe` refers to the Google `sentencepiece` library tokenizer. `wpe` refers to the HuggingFace BERT Word Piece tokenizer. Please refer to the papers above for the relevant technique in order to select an appropriate tokenizer.
- `--no_lower_case`: When this flag is passed, it will force the tokenizer to create separate tokens for upper and lower case characters. By default, the script will turn all the text to lower case before tokenization (and if upper case characters are passed during training/inference, the tokenizer will emit a token equivalent to Out-Of-Vocabulary). Used primarily for the English language.
- `--spe_type`: The `sentencepiece` library has a few implementations of the tokenization technique, and `spe_type` refers to these implementations. Currently supported types are `unigram`, `bpe`, `char`, `word`. Defaults to `bpe`.
- `--spe_character_coverage`: The `sentencepiece` library considers how much of the original vocabulary it should cover in its "base set" of tokens (akin to the lower and upper case characters of the English language). For almost all languages with small base token sets `(<1000 tokens)`, this should be kept at its default of 1.0. For languages with larger vocabularies (say Japanese, Mandarin, Korean etc), the suggested value is 0.9995.
- `--spe_sample_size`: If the dataset is too large, consider using a sampled dataset indicated by a positive integer. By default, any negative value (default = -1) will use the entire dataset.
- `--spe_train_extremely_large_corpus`: When training a sentencepiece tokenizer on very large amounts of text, sometimes the tokenizer will run out of memory or won't be able to process so much data in RAM. At some point you might receive the following error - "Input corpus too large, try with train_extremely_large_corpus=true". If your machine has large amounts of RAM, it might still be possible to build the tokenizer using the above flag. Will silently fail if it runs out of RAM.
- `--log`: Whether the script should display log messages
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=32 \
--tokenizer="spe" \
--no_lower_case \
--spe_type="unigram" \
--log
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Subword Tokenization
In the [ASR with NeMo notebook](https://colab.research.google.com/github/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/01_ASR_with_NeMo.ipynb), we discuss the pipeline necessary for Automatic Speech Recognition (ASR), and then use the NeMo toolkit to construct a functioning speech recognition model.
In this notebook, we take a step further and look into subword tokenization as a useful encoding scheme for ASR models, and why they are necessary. We then construct a custom tokenizer from the dataset, and use it to construct and train an ASR model on the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). Subword Tokenization
We begin with a short intro to what exactly is subword tokenization. If you are familiar with some Natural Language Processing terminologies, then you might have heard of the term "subword" frequently.
So what is a subword in the first place? Simply put, it is either a single character or a group of characters. When combined according to a tokenization-detokenization algorithm, it generates a set of characters, words, or entire sentences.
Many subword tokenization-detokenization algorithms exist, which can be built using large corpora of text data to tokenize and detokenize the data to and from subwords effectively. Some of the most commonly used subword tokenization methods are [Byte Pair Encoding](https://arxiv.org/abs/1508.07909), [Word Piece Encoding](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and [Sentence Piece Encoding](https://www.aclweb.org/anthology/D18-2012/), to name just a few.
------
Here, we will show a short demo on why subword tokenization is necessary for Automatic Speech Recognition under certain situations and its benefits to the model in terms of efficiency and accuracy. We will implement the general steps that a subword tokenization algorithm might perform. Note - this is just a simplified demonstration of the underlying technique.
###Code
TEXT_CORPUS = [
"hello world",
"today is a good day",
]
###Output
_____no_output_____
###Markdown
We first start with a simple character tokenizer
###Code
def char_tokenize(text):
tokens = []
for char in text:
tokens.append(ord(char))
return tokens
def char_detokenize(tokens):
tokens = [chr(t) for t in tokens]
text = "".join(tokens)
return text
###Output
_____no_output_____
###Markdown
Now let's make sure that the character tokenizer is doing its job correctly!
###Code
char_tokens = char_tokenize(TEXT_CORPUS[0])
print("Tokenized tokens :", char_tokens)
text = char_detokenize(char_tokens)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
-----
Great! The character tokenizer did its job correctly - each character is separated as an individual token, and they can be reconstructed into precisely the original text!
Now let's create a simple dictionary-based tokenizer - it will have a select set of subwords that it will use to map tokens back and forth. Note - to simplify the demonstration of the technique, we will use a vocabulary with entire words. However, note that this is uncommon in practice unless the vocabulary sizes are huge when built on natural text.
###Code
def dict_tokenize(text, vocabulary):
tokens = []
# first do full word searches
split_text = text.split()
for split in split_text:
if split in vocabulary:
tokens.append(vocabulary[split])
else:
chars = list(split)
t_chars = [vocabulary[c] for c in chars]
tokens.extend(t_chars)
tokens.append(vocabulary[" "])
# remove extra space token
tokens.pop(-1)
return tokens
def dict_detokenize(tokens, vocabulary):
text = ""
reverse_vocab = {v: k for k, v in vocabulary.items()}
for token in tokens:
if token in reverse_vocab:
text = text + reverse_vocab[token]
else:
text = text + "".join(token)
return text
###Output
_____no_output_____
###Markdown
First, we need to build a vocabulary for this tokenizer. It will contain all the lower case English characters, space, and a few whole words for simplicity.
###Code
vocabulary = {chr(i + ord("a")) : (i + 1) for i in range(26)}
# add whole words and special tokens
vocabulary[" "] = 0
vocabulary["hello"] = len(vocabulary) + 1
vocabulary["today"] = len(vocabulary) + 1
vocabulary["good"] = len(vocabulary) + 1
print(vocabulary)
dict_tokens = dict_tokenize(TEXT_CORPUS[0], vocabulary)
print("Tokenized tokens :", dict_tokens)
text = dict_detokenize(dict_tokens, vocabulary)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
------
Great! Our dictionary tokenizer works well and tokenizes-detokenizes the data correctly.
You might be wondering - why did we have to go through all this trouble to tokenize and detokenize data if we get back the same thing?
For ASR - the hidden benefit lies in the length of the tokenized representation!
###Code
print("Character tokenization length -", len(char_tokens))
print("Dict tokenization length -", len(dict_tokens))
###Output
_____no_output_____
###Markdown
By having the whole word "hello" in our tokenizer's dictionary, we could reduce the length of the tokenized data by four tokens and still represent the same information!
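To make the lookup itself concrete, here is a tiny, hedged sketch (illustration only - real tokenizers are learned from data, and this is not the algorithm NeMo uses) of the greedy longest-match strategy a subword tokenizer can apply, using a made-up vocabulary that contains the subwords "hel" and "lo":
```
# Toy greedy longest-match subword tokenizer (illustrative vocabulary).
subword_vocab = {chr(i + ord("a")): i + 1 for i in range(26)}
subword_vocab.update({" ": 0, "hel": 100, "lo": 101})

def subword_tokenize(text, vocab, max_len=3):
    tokens = []
    i = 0
    while i < len(text):
        # try the longest candidate first, fall back to shorter pieces
        for j in range(min(len(text), i + max_len), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise KeyError(f"no subword found at position {i}")
    return tokens

print(subword_tokenize("hello world", subword_vocab))  # 8 tokens instead of 11 characters
```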
Actual subword algorithms like the ones discussed above go several steps further - they partition whole words based on their occurrence in text and build tokens for them too! So instead of wasting 5 tokens for `["h", "e", "l", "l", "o"]`, we can represent it as `["hel", "lo"]` and then merge those subword tokens together to get back `hello` using just 2 tokens! The necessity of subword tokenization
It has been found via extensive research in the domain of Neural Machine Translation and Language Modelling (and its variants), that subword tokenization not only reduces the length of the tokenized representation (thereby making sentences shorter and more manageable for models to learn), but also boosts the accuracy of prediction of correct tokens (refer to the earlier cited papers).
You might remember that, earlier, we mentioned subword tokenization as a necessity rather than just a nice-to-have component for ASR. In the previous tutorial, we used the [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) loss function to train the model, but this loss function has a few limitations-
- **Generated tokens are conditionally independent of each other**. In other words - the probability of character "l" being predicted after "hel" is conditionally independent of the previous token - so any other token can also be predicted unless the model has future information!
- **The length of the generated (target) sequence must be shorter than that of the source sequence.**
------
It turns out - subword tokenization helps alleviate both of these issues!
- Sophisticated subword tokenization algorithms build their vocabularies based on large text corpora. To accurately tokenize such large volumes of text with minimal vocabulary size, the subwords that are learned inherently model the interdependency between tokens of that language to some degree.
Looking at the previous example, the token `hel` is a single token that represents the relationship `h` => `e` => `l`. When the model predicts the single token `hel`, it implicitly predicts this relationship - even though the subsequent token can be either `l` (for `hell`) or `lo` (for `hello`) and is predicted independently of the previous token!
- By reducing the target sentence length by subword tokenization (target sentence here being the characters/subwords transcribed from the audio signal), we entirely sidestep the sequence length limitation of CTC loss!
This means we can perform a larger number of pooling steps in our acoustic models, thereby improving execution speed while simultaneously reducing memory requirements. Building a custom subword tokenizer
After all that talk about subword tokenization, let's finally build a custom tokenizer for our ASR model! While the `AN4` dataset is simple enough to be trained using character-based models, its small size is also perfect for a demonstration on a notebook. Preparing the dataset (AN4)
The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.
Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)
###Code
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# The directory should exist.
data_dir = "."
import glob
import os
import subprocess
import tarfile
import wget
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
###Output
_____no_output_____
###Markdown
You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.
Creating Data Manifests
The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.
Here's an example of what one line in a NeMo-compatible manifest might look like:
```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```
We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:
```
...
P I T T S B U R G H (cen5-fash-b)
TWO SIX EIGHT FOUR FOUR ONE EIGHT (cen7-fash-b)
...
```
###Code
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
###Output
_____no_output_____
###Markdown
Let's look at a few files from this manifest -
###Code
!head -n 5 {data_dir}/an4/train_manifest.json
###Output
_____no_output_____
###Markdown
Build a custom tokenizer
Next, we will use a NeMo script to easily build a tokenizer for the above dataset. The script takes a few arguments, which will be explained in detail.
First, download the tokenizer creation script from the nemo repository.
###Code
if not os.path.exists("scripts/tokenizers/process_asr_text_tokenizer.py"):
!mkdir scripts
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
The script above takes a few important arguments -
- either `--manifest` or `--data_file`: If your text data lies inside of an ASR manifest file, then use the `--manifest` path. If instead the text data is inside a file with separate lines corresponding to different text lines, then use `--data_file`. In either case, you can add commas to concatenate different manifests or different data files.
- `--data_root`: The output directory (whose subdirectories will be created if not present) where the tokenizers will be placed.
- `--vocab_size`: The size of the tokenizer vocabulary. Larger vocabularies can accommodate almost entire words, but the decoder size of any model will grow proportionally.
- `--tokenizer`: Can be either `spe` or `wpe` . `spe` refers to the Google `sentencepiece` library tokenizer. `wpe` refers to the HuggingFace BERT Word Piece tokenizer. Please refer to the papers above for the relevant technique in order to select an appropriate tokenizer.
- `--no_lower_case`: When this flag is passed, it will force the tokenizer to create separate tokens for upper and lower case characters. By default, the script will turn all the text to lower case before tokenization (and if upper case characters are passed during training/inference, the tokenizer will emit a token equivalent to Out-Of-Vocabulary). Used primarily for the English language.
- `--spe_type`: The `sentencepiece` library has a few implementations of the tokenization technique, and `spe_type` refers to these implementations. Currently supported types are `unigram`, `bpe`, `char`, `word`. Defaults to `bpe`.
- `--spe_character_coverage`: The `sentencepiece` library considers how much of the original vocabulary it should cover in its "base set" of tokens (akin to the lower and upper case characters of the English language). For almost all languages with small base token sets `(<1000 tokens)`, this should be kept at its default of 1.0. For languages with larger vocabularies (say Japanese, Mandarin, Korean etc), the suggested value is 0.9995.
- `--spe_sample_size`: If the dataset is too large, consider using a sampled dataset indicated by a positive integer. By default, any negative value (default = -1) will use the entire dataset.
- `--spe_train_extremely_large_corpus`: When training a sentencepiece tokenizer on very large amounts of text, sometimes the tokenizer will run out of memory or won't be able to process so much data in RAM. At some point you might receive the following error - "Input corpus too large, try with train_extremely_large_corpus=true". If your machine has large amounts of RAM, it might still be possible to build the tokenizer using the above flag. Will silently fail if it runs out of RAM.
- `--log`: Whether the script should display log messages
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=32 \
--tokenizer="spe" \
--no_lower_case \
--spe_type="unigram" \
--log
###Output
_____no_output_____
###Markdown
-----
That's it! Our tokenizer is now built and stored inside the `data_root` directory that we provided to the script.
First we start by inspecting the tokenizer vocabulary itself. To keep it manageable, we will print just the first 10 tokens of the vocabulary:
###Code
!head -n 10 {data_dir}/tokenizers/an4/tokenizer_spe_unigram_v32/vocab.txt
###Output
_____no_output_____
###Markdown
Training an ASR Model with subword tokenization
Now that our tokenizer is built, let's begin constructing an ASR model that will use this tokenizer for its dataset pre-processing and post-processing steps.
We will use a Citrinet model to demonstrate the usage of subword tokenization models for training and inference. Citrinet is a [QuartzNet-like architecture](https://arxiv.org/abs/1910.10261), but it uses subword-tokenization along with 8x subsampling and [Squeeze-and-Excitation](https://arxiv.org/abs/1709.01507) to achieve strong accuracy in transcriptions while still using non-autoregressive decoding for efficient inference.
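To get an intuition for why 8x subsampling pairs naturally with subword targets, here is a rough, illustrative sketch (the frame rate and word counts below are typical assumptions, not values taken from this notebook):
```
# ~10 s of audio at a typical 10 ms feature hop gives ~1000 spectrogram frames.
frames = 1000
encoder_steps = frames // 8   # 8x subsampling -> 125 encoder outputs
# CTC requires the target sequence to be no longer than the encoder output.
# ~10 s of speech is roughly 25 words (~150 characters including spaces), which
# already exceeds the 125-step limit at character level, while the same sentence
# is usually only a few dozen subword tokens.
print(encoder_steps)  # 125
```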
We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/r1.0.0rc1/).
NeMo lets us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Training from scratch
To train from scratch, you need to prepare your training data in the right format and specify your model's architecture. Specifying Our Model with a YAML Config File
We'll build a *Citrinet* model for this tutorial and use *greedy CTC decoder*, using the configuration found in `./configs/citrinet_bpe.yaml`.
If we open up this config file, we find a `model` section which describes the architecture of our model. A model contains an entry labeled `encoder`, with a field called `jasper` that contains a list with multiple entries. Each of the members in this list specifies one block in our model, and looks something like this:
```
- filters: 192
repeat: 5
kernel: [11]
stride: [1]
dilation: [1]
dropout: 0.0
residual: false
separable: true
se: true
se_context_size: -1
```
The first member of the list corresponds to the first block in the QuartzNet/Citrinet architecture diagram.
Some entries at the top of the file specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.
Using a YAML config such as this helps get a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.
###Code
from omegaconf import OmegaConf, open_dict
params = OmegaConf.load("./configs/config_bpe.yaml")
###Output
_____no_output_____
###Markdown
Let us make the network smaller since `AN4` is a particularly small dataset and does not need the capacity of the general config.
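The config loaded above may already be sized for a quick demo, but if you want to shrink it further, a minimal sketch (assuming the block layout shown earlier, where each entry of `model.encoder.jasper` has a `repeat` field) could look like this:
```
# Hedged sketch, not part of the original notebook: reduce the number of repeated
# sub-blocks per encoder block. This keeps all channel sizes intact, so the rest
# of the config (e.g. the decoder input size) still matches.
for block in params.model.encoder.jasper:
    if "repeat" in block and block.repeat > 1:
        block.repeat = 1
```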
###Code
print(OmegaConf.to_yaml(params))
###Output
_____no_output_____
###Markdown
Specifying the tokenizer to the model
Now that we have a model config, we are almost ready to train it ! We just have to inform it where the tokenizer directory exists and it will do the rest for us !
We have to provide just two pieces of information via the config:
- `tokenizer.dir`: The directory where the tokenizer files are stored
- `tokenizer.type`: Can be `bpe` (for `sentencepiece` based tokenizers) or `wpe` (for HuggingFace based BERT Word Piece tokenizers). This tells the model what type of tokenizer is being supplied, so that it can parse the tokenizer directory and construct the actual tokenizer.
**Note**: We only have to provide the **directory** where the tokenizer file exists along with its vocabulary and any other essential components. We pass the directory instead of an explicit vocabulary path, since not all libraries construct their tokenizer in the same manner, so the model will figure out how it should prepare the tokenizer.
###Code
params.model.tokenizer.dir = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/" # note this is a directory, not a path to a vocabulary file
params.model.tokenizer.type = "bpe"
###Output
_____no_output_____
###Markdown
Training with PyTorch Lightning
NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected.
However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule and we recommend you use PytorchLightning for training and fine-tuning as it makes using mixed precision and distributed training very easy. So to start, let's create a Trainer instance for training on a GPU for 50 epochs
###Code
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
###Output
_____no_output_____
###Markdown
Next, we instantiate an ASR model based on our ``citrinet_bpe.yaml`` file from the previous section.
Note that this is a stage during which we also tell the model where our training and validation manifests are.
###Code
# Update paths to dataset
params.model.train_ds.manifest_filepath = train_manifest
params.model.validation_ds.manifest_filepath = test_manifest
# remove spec augment for this dataset
params.model.spec_augment.rect_masks = 0
###Output
_____no_output_____
###Markdown
Note the subtle difference in the model that we instantiate - `EncDecCTCModelBPE` instead of `EncDecCTCModel`.
`EncDecCTCModelBPE` is nearly identical to `EncDecCTCModel` (it is in fact a subclass!); it simply adds support for subword tokenization.
###Code
first_asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=params.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Training: Monitoring Progress
We can now start TensorBoard to monitor how training goes. Recall that WER stands for Word Error Rate, so the lower it is, the better.
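As a quick refresher (this is the standard WER definition, nothing NeMo-specific): WER = (substitutions + deletions + insertions) / number of words in the reference. A tiny worked example:
```
# reference:  "the cat sat"        (3 words)
# hypothesis: "the cat sat down"   (1 inserted word, nothing substituted or deleted)
substitutions, deletions, insertions, ref_words = 0, 0, 1, 3
wer = (substitutions + deletions + insertions) / ref_words
print(f"WER = {wer:.2f}")  # 0.33 -> 33% word error rate
```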
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
With that, we can start training with just one line!
###Code
# Start training!!!
trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Save the model easily along with the tokenizer using `save_to`.
Later, we can use `restore_from` to restore the model; this will also reinitialize the tokenizer.
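For reference, restoring the checkpoint written by the next cell is a one-liner (the same class method is used later in this notebook):
```
# Hedged sketch: rebuild the model (and its tokenizer) from the .nemo file.
restored = nemo_asr.models.EncDecCTCModelBPE.restore_from("first_model.nemo")
```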
###Code
first_asr_model.save_to("first_model.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____
###Markdown
There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to()`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from()`. We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:
###Code
print(params.model.optim)
###Output
_____no_output_____
###Markdown
After training and hyper parameter tuning
Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `.setup_optimization()` with the new optimization parameters.
###Code
import copy
new_opt = copy.deepcopy(params.model.optim)
new_opt.lr = 0.1
first_asr_model.setup_optimization(optim_config=new_opt);
# And then you can invoke trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Inference
Let's have a quick look at how one could run inference with NeMo's ASR model.
First, ``EncDecCTCModelBPE`` and its subclasses contain a handy ``transcribe`` method which can be used to simply obtain audio files' transcriptions. It also has a ``batch_size`` argument to improve performance.
###Code
print(first_asr_model.transcribe(paths2audio_files=[data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
batch_size=4))
###Output
_____no_output_____
###Markdown
Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute Word Error Rate (WER) metric between predictions and references.
###Code
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16
# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()
first_asr_model.eval()
# We remove some preprocessing artifacts which benefit training
first_asr_model.preprocessor.featurizer.pad_to = 0
first_asr_model.preprocessor.featurizer.dither = 0.0
# We will be computing Word Error Rate (WER) metric between our hypothesis and predictions.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
test_batch = [x.cuda() for x in test_batch]
targets = test_batch[2]
targets_lengths = test_batch[3]
log_probs, encoded_len, greedy_predictions = first_asr_model(
input_signal=test_batch[0], input_signal_length=test_batch[1]
)
# Notice the model has a helper object to compute WER
first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
_, wer_num, wer_denom = first_asr_model._wer.compute()
wer_nums.append(wer_num.detach().cpu().numpy())
wer_denoms.append(wer_denom.detach().cpu().numpy())
# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
###Output
_____no_output_____
###Markdown
This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Utilizing the underlying tokenizer
Since the model has an underlying tokenizer, it would be nice to use it externally as well - say for getting the subwords of the transcript or to tokenize a dataset using the same tokenizer as the ASR model.
###Code
tokenizer = first_asr_model.tokenizer
tokenizer
###Output
_____no_output_____
###Markdown
You can get the tokenizer's vocabulary using the `tokenizer.tokenizer.get_vocab()` method.
ASR tokenizers will map the subword to an integer index in the vocabulary for convenience.
###Code
vocab = tokenizer.tokenizer.get_vocab()
vocab
###Output
_____no_output_____
###Markdown
You can also tokenize and detokenize some text using this tokenizer, with the same API across all of NeMo.
###Code
tokens = tokenizer.text_to_tokens("hello world")
tokens
token_ids = tokenizer.text_to_ids("hello world")
token_ids
subwords = tokenizer.ids_to_tokens(token_ids)
subwords
text = tokenizer.ids_to_text(token_ids)
text
###Output
_____no_output_____
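###Markdown
As a small cross-check, the subwords returned by `text_to_tokens` should line up one-to-one with the ids returned by `text_to_ids`, and each subword should appear in the vocabulary we printed above. A quick way to eyeball this (reusing the `tokens`, `token_ids` and `vocab` variables from the cells above):
###Code
# Each subword should map to the same id via the vocabulary lookup
for tok, tok_id in zip(tokens, token_ids):
    print(f"{tok!r:>12} -> id {tok_id} (vocab lookup: {vocab.get(tok)})")
###Output
_____no_output_____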
###Markdown
Model Improvements
You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.
Data Augmentation
There exist several ASR data augmentation methods that can increase the size of our training set.
For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)
Our toy model disables spectrogram augmentation, because it is not significantly beneficial for the short demo.
###Code
print(OmegaConf.to_yaml(first_asr_model._cfg['spec_augment']))
###Output
_____no_output_____
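###Markdown
One way to re-enable augmentation for your own runs is to edit these fields on the loaded config before building the model. The sketch below only touches `rect_masks` (the field we zeroed out earlier); the value is an arbitrary illustrative choice, not a tuned setting.
###Code
# Illustration only: re-enable rectangular masking on the config.
# A model must be (re)built from params.model for this to take effect -
# the model above was constructed with masking disabled.
params.model.spec_augment.rect_masks = 5
print(OmegaConf.to_yaml(params.model.spec_augment))
###Output
_____no_output_____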
###Markdown
If you want to enable SpecAugment in your model, make sure your .yaml config file contains a 'model/spec_augment' section that looks like the one above. Transfer learning
Transfer learning is an important machine learning technique that uses a model’s knowledge of one task to perform better on another. Fine-tuning is one of the techniques to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.
Transfer learning with NeMo is simple. Let's demonstrate how we could fine-tune the model we trained earlier on AN4 data. (NOTE: this is a toy example). And, while we are at it, we will change the model's vocabulary to demonstrate how it's done. -----
First, let's create another tokenizer - perhaps using a larger vocabulary size than the small tokenizer we created earlier. We also swap out the `sentencepiece` tokenizer for a `BERT Word Piece` tokenizer.
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=64 \
--tokenizer="wpe" \
--no_lower_case \
--log
###Output
_____no_output_____
###Markdown
Now let's load the previously trained model so that we can fine-tune it:
###Code
restored_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("./first_model.nemo")
###Output
_____no_output_____
###Markdown
Now let's update the vocabulary in this model
###Code
# Check what kind of vocabulary/alphabet the model has right now
print(restored_model.decoder.vocabulary)
# Let's change the tokenizer vocabulary by passing the path to the new directory,
# and also change the type
restored_model.change_vocabulary(
new_tokenizer_dir=data_dir + "/tokenizers/an4/tokenizer_wpe_v64/",
new_tokenizer_type="wpe"
)
###Output
_____no_output_____
###Markdown
After this, our decoder has completely changed, but our encoder (where most of the weights are) remains intact. Let's fine-tune this model for 20 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After training and hyper parameter tuning" section).
**Note**: For this demonstration, we will also freeze the encoder to speed up finetuning (since both tokenizers are built on the same train set), but in general it should not be done for proper training on a new language (or on a different corpus than the original train corpus).
###Code
# Use the smaller learning rate we set before
restored_model.setup_optimization(optim_config=new_opt)
# Point to the data we'll use for fine-tuning as the training set
restored_model.setup_training_data(train_data_config=params['model']['train_ds'])
# Point to the new validation data for fine-tuning
restored_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
# Freeze the encoder layers (should not be done for finetuning, only done for demo)
restored_model.encoder.freeze()
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=20)
trainer.fit(restored_model)
###Output
_____no_output_____
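###Markdown
As an optional last step, you can run the fine-tuned model through the same `transcribe` call we used earlier to spot-check its output on one of the utterances:
###Code
# Spot-check the fine-tuned model on an utterance we transcribed before
print(restored_model.transcribe(
    paths2audio_files=[data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav'],
    batch_size=1))
###Output
_____no_output_____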
###Markdown
Automatic Speech Recognition with Subword Tokenization
In the [ASR with NeMo notebook](https://colab.research.google.com/github/NVIDIA/NeMo/blob/{BRANCH}/tutorials/asr/01_ASR_with_NeMo.ipynb), we discuss the pipeline necessary for Automatic Speech Recognition (ASR), and then use the NeMo toolkit to construct a functioning speech recognition model.
In this notebook, we take a step further and look into subword tokenization as a useful encoding scheme for ASR models, and why they are necessary. We then construct a custom tokenizer from the dataset, and use it to construct and train an ASR model on the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). Subword Tokenization
We begin with a short intro to what exactly is subword tokenization. If you are familiar with some Natural Language Processing terminologies, then you might have heard of the term "subword" frequently.
So what is a subword in the first place? Simply put, it is either a single character or a group of characters. When combined according to a tokenization-detokenization algorithm, it generates a set of characters, words, or entire sentences.
Many subword tokenization-detokenization algorithms exist, which can be built using large corpora of text data to tokenize and detokenize the data to and from subwords effectively. Some of the most commonly used subword tokenization methods are [Byte Pair Encoding](https://arxiv.org/abs/1508.07909), [Word Piece Encoding](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and [Sentence Piece Encoding](https://www.aclweb.org/anthology/D18-2012/), to name just a few.
------
Here, we will show a short demo on why subword tokenization is necessary for Automatic Speech Recognition under certain situations and its benefits to the model in terms of efficiency and accuracy. We will implement the general steps that a subword tokenization algorithm might perform. Note - this is just a simplified demonstration of the underlying technique.
###Code
TEXT_CORPUS = [
"hello world",
"today is a good day",
]
###Output
_____no_output_____
###Markdown
We first start with a simple character tokenizer
###Code
def char_tokenize(text):
tokens = []
for char in text:
tokens.append(ord(char))
return tokens
def char_detokenize(tokens):
tokens = [chr(t) for t in tokens]
text = "".join(tokens)
return text
###Output
_____no_output_____
###Markdown
Now let's make sure that the character tokenizer is doing its job correctly!
###Code
char_tokens = char_tokenize(TEXT_CORPUS[0])
print("Tokenized tokens :", char_tokens)
text = char_detokenize(char_tokens)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
-----
Great! The character tokenizer did its job correctly - each character is separated as an individual token, and they can be reconstructed into precisely the original text!
Now let's create a simple dictionary-based tokenizer - it will have a select set of subwords that it will use to map tokens back and forth. Note - to simplify the technique's demonstration, we will use a vocabulary with entire words. However, note that this is an uncommon occurrence unless the vocabulary sizes are huge when built on natural text.
###Code
def dict_tokenize(text, vocabulary):
tokens = []
# first do full word searches
split_text = text.split()
for split in split_text:
if split in vocabulary:
tokens.append(vocabulary[split])
else:
chars = list(split)
t_chars = [vocabulary[c] for c in chars]
tokens.extend(t_chars)
tokens.append(vocabulary[" "])
# remove extra space token
tokens.pop(-1)
return tokens
def dict_detokenize(tokens, vocabulary):
text = ""
reverse_vocab = {v: k for k, v in vocabulary.items()}
for token in tokens:
if token in reverse_vocab:
text = text + reverse_vocab[token]
else:
text = text + "".join(token)
return text
###Output
_____no_output_____
###Markdown
First, we need to build a vocabulary for this tokenizer. It will contain all the lower case English characters, space, and a few whole words for simplicity.
###Code
vocabulary = {chr(i + ord("a")) : (i + 1) for i in range(26)}
# add whole words and special tokens
vocabulary[" "] = 0
vocabulary["hello"] = len(vocabulary) + 1
vocabulary["today"] = len(vocabulary) + 1
vocabulary["good"] = len(vocabulary) + 1
print(vocabulary)
dict_tokens = dict_tokenize(TEXT_CORPUS[0], vocabulary)
print("Tokenized tokens :", dict_tokens)
text = dict_detokenize(dict_tokens, vocabulary)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
------
Great! Our dictionary tokenizer works well and tokenizes-detokenizes the data correctly.
You might be wondering - why did we have to go through all this trouble to tokenize and detokenize data if we get back the same thing?
For ASR - the hidden benefit lies in the length of the tokenized representation!
###Code
print("Character tokenization length -", len(char_tokens))
print("Dict tokenization length -", len(dict_tokens))
###Output
_____no_output_____
###Markdown
By having the whole word "hello" in our tokenizer's dictionary, we could reduce the length of the tokenized data by four tokens and still represent the same information!
Actual subword algorithms like the ones discussed above go several steps further - they partition whole words based on occurrence in text and build tokens for them too! So instead of wasting 5 tokens for `["h", "e", "l", "l", "o"]`, we can represent it as `["hel", "lo"]` and then merge those two tokens back into `hello`, using just 2 tokens! The necessity of subword tokenization
It has been found via extensive research in the domain of Neural Machine Translation and Language Modelling (and its variants), that subword tokenization not only reduces the length of the tokenized representation (thereby making sentences shorter and more manageable for models to learn), but also boosts the accuracy of prediction of correct tokens (refer to the earlier cited papers).
You might remember that, earlier, we mentioned subword tokenization as a necessity rather than just a nice-to-have component for ASR. In the previous tutorial, we used the [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) loss function to train the model, but this loss function has a few limitations -
- **Generated tokens are conditionally independent of each other**. In other words - the probability of character "l" being predicted after "hel" is conditionally independent of the previous token - so any other token can also be predicted unless the model has future information!
- **The length of the generated (target) sequence must be shorter than that of the source sequence.**
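To make the second limitation concrete, here is a back-of-the-envelope sketch. The numbers (16 kHz audio, a 10 ms spectrogram hop, 8x subsampling as in the Citrinet model used later) are illustrative assumptions, not values taken from the config:
###Code
# Toy arithmetic for the CTC length constraint (illustrative numbers only)
sample_rate = 16000       # Hz
hop_ms = 10               # spectrogram hop, assumed
duration_s = 2.0          # a short utterance
subsampling = 8           # Citrinet-style 8x subsampling

num_frames = int(duration_s * 1000 / hop_ms)   # ~200 spectrogram frames
encoder_steps = num_frames // subsampling      # ~25 steps CTC can emit over

char_target = list("two six eight four four one eight")        # character-level target
subword_target = "two six eight four four one eight".split()   # word-like subword target

print("Encoder output steps :", encoder_steps)
print("Character target len :", len(char_target), "-> too long for CTC here")
print("Subword target len   :", len(subword_target), "-> fits comfortably")
###Output
_____no_output_____
###Markdown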
------
It turns out - subword tokenization helps alleviate both of these issues!
- Sophisticated subword tokenization algorithms build their vocabularies based on large text corpora. To accurately tokenize such large volumes of text with minimal vocabulary size, the subwords that are learned inherently model the interdependency between tokens of that language to some degree.
Looking at the previous example, the token `hel` is a single token that represents the relationship `h` => `e` => `l`. When the model predicts the single token `hel`, it implicitly predicts this relationship - even though the subsequent token can be either `l` (for `hell`) or `lo` (for `hello`) and is predicted independently of the previous token!
- By reducing the target sentence length by subword tokenization (target sentence here being the characters/subwords transcribed from the audio signal), we entirely sidestep the sequence length limitation of CTC loss!
This means we can perform a larger number of pooling steps in our acoustic models, thereby improving execution speed while simultaneously reducing memory requirements. Building a custom subword tokenizer
After all that talk about subword tokenization, let's finally build a custom tokenizer for our ASR model! While the `AN4` dataset is simple enough to be trained using character-based models, its small size is also perfect for a demonstration on a notebook. Preparing the dataset (AN4)
The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.
Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)
###Code
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
data_dir = '.'
import glob
import os
import subprocess
import tarfile
import wget
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
###Output
_____no_output_____
###Markdown
You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.
Creating Data Manifests
The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.
Here's an example of what one line in a NeMo-compatible manifest might look like:
```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```
We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:
```
...
P I T T S B U R G H (cen5-fash-b)
TWO SIX EIGHT FOUR FOUR ONE EIGHT (cen7-fash-b)
...
```
###Code
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
###Output
_____no_output_____
###Markdown
Let's look at a few lines from this manifest:
###Code
!head -n 5 ./an4/train_manifest.json
###Output
_____no_output_____
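###Markdown
Since each manifest line is a standalone JSON object, it is also easy to load and inspect programmatically. For example, we can count the utterances and the total audio duration in the training manifest:
###Code
# Summarise the training manifest we just built
import json

durations = []
with open(train_manifest, 'r') as f:
    for line in f:
        entry = json.loads(line)
        durations.append(entry['duration'])

print(f"{len(durations)} training utterances, {sum(durations) / 60:.1f} minutes of audio in total")
###Output
_____no_output_____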
###Markdown
Build a custom tokenizer
Next, we will use a NeMo script to easily build a tokenizer for the above dataset. The script takes a few arguments, which will be explained in detail.
First, download the tokenizer creation script from the nemo repository.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!mkdir scripts
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
The script above takes a few important arguments -
- either `--manifest` or `--data_file`: If your text data lies inside an ASR manifest file, then use the `--manifest` path. If instead the text data is inside a file with separate lines corresponding to different text lines, then use `--data_file`. In either case, you can add commas to concatenate different manifests or different data files.
- `--data_root`: The output directory (whose subdirectories will be created if not present) where the tokenizers will be placed.
- `--vocab_size`: The size of the tokenizer vocabulary. Larger vocabularies can accommodate almost entire words, but the decoder size of any model will grow proportionally.
- `--tokenizer`: Can be either `spe` or `wpe` . `spe` refers to the Google `sentencepiece` library tokenizer. `wpe` refers to the HuggingFace BERT Word Piece tokenizer. Please refer to the papers above for the relevant technique in order to select an appropriate tokenizer.
- `--no_lower_case`: When this flag is passed, it will force the tokenizer to create separate tokens for upper and lower case characters. By default, the script will turn all the text to lower case before tokenization (and if upper case characters are passed during training/inference, the tokenizer will emit a token equivalent to Out-Of-Vocabulary). Used primarily for the English language.
- `--spe_type`: The `sentencepiece` library has a few implementations of the tokenization technique, and `spe_type` refers to these implementations. Currently supported types are `unigram`, `bpe`, `char`, `word`. Defaults to `bpe`.
- `--spe_character_coverage`: The `sentencepiece` library considers how much of the original vocabulary it should cover in its "base set" of tokens (akin to the lower and upper case characters of the English language). For almost all languages with small base token sets `(<1000 tokens)`, this should be kept at its default of 1.0. For languages with larger vocabularies (say Japanese, Mandarin, Korean etc), the suggested value is 0.9995.
- `--log`: Whether the script should display log messages
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="./an4/train_manifest.json" \
--data_root="./tokenizers/an4/" \
--vocab_size=32 \
--tokenizer="spe" \
--no_lower_case \
--spe_type="unigram" \
--log
###Output
_____no_output_____
###Markdown
-----
That's it! Our tokenizer is now built and stored inside the `data_root` directory that we provided to the script.
First we start by inspecting the tokenizer vocabulary itself. To keep it manageable, we will print just the first 10 tokens of the vocabulary:
###Code
!head -n 10 ./tokenizers/an4/tokenizer_spe_unigram_v32/vocab.txt
###Output
_____no_output_____
###Markdown
Training an ASR Model with subword tokenization
Now that our tokenizer is built, let's begin constructing an ASR model that will use this tokenizer for its dataset pre-processing and post-processing steps.
We will use a Citrinet model to demonstrate the usage of subword tokenization models for training and inference. Citrinet is a [QuartzNet-like architecture](https://arxiv.org/abs/1910.10261), but it uses subword-tokenization along with 8x subsampling and [Squeeze-and-Excitation](https://arxiv.org/abs/1709.01507) to achieve strong accuracy in transcriptions while still using non-autoregressive decoding for efficient inference.
We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/index.html).
NeMo lets us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Training from scratch
To train from scratch, you need to prepare your training data in the right format and specify your model's architecture. Specifying Our Model with a YAML Config File
We'll build a *Citrinet* model for this tutorial and use *greedy CTC decoder*, using the configuration found in `./configs/citrinet_bpe.yaml`.
If we open up this config file, we find a model section which describes the architecture of our model. It contains an entry labeled `encoder`, with a field called `jasper` that contains a list with multiple entries. Each of the members in this list specifies one block in our model, and looks something like this:
```
- filters: 192
repeat: 5
kernel: [11]
stride: [1]
dilation: [1]
dropout: 0.0
residual: false
separable: true
se: true
se_context_size: -1
```
The first member of the list corresponds to the first block in the QuartzNet/Citrinet architecture diagram.
Some entries at the top of the file specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.
Using a YAML config such as this helps get a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.
###Code
from omegaconf import OmegaConf, open_dict
params = OmegaConf.load("./configs/config_bpe.yaml")
###Output
_____no_output_____
###Markdown
Let us make the network smaller since `AN4` is a particularly small dataset and does not need the capacity of the general config.
###Code
print(OmegaConf.to_yaml(params))
###Output
_____no_output_____
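###Markdown
The printout above shows the full config. If you do want to shrink the network before instantiating it, you can edit the loaded config in place. The sketch below is one illustrative way to do that, assuming the `encoder.jasper` block structure shown earlier; the caps of 256 filters and 3 repeats are arbitrary, not tuned values.
###Code
import copy

# Work on a copy so the original config stays untouched
smaller = copy.deepcopy(params)
for block in smaller.model.encoder.jasper:
    if "filters" in block:
        block.filters = min(block.filters, 256)
    if "repeat" in block:
        block.repeat = min(block.repeat, 3)

# Inspect the first (now capped) encoder block; build the model from
# smaller.model instead of params.model if you want to use it.
print(OmegaConf.to_yaml(smaller.model.encoder.jasper[0]))
###Output
_____no_output_____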
###Markdown
Specifying the tokenizer to the model
Now that we have a model config, we are almost ready to train it! We just have to tell it where the tokenizer directory is, and it will do the rest for us!
We have to provide just two pieces of information via the config:
- `tokenizer.dir`: The directory where the tokenizer files are stored
- `tokenizer.type`: Can be `bpe` (for `sentencepiece`-based tokenizers) or `wpe` (for HuggingFace-based BERT Word Piece tokenizers). This tells the model what type of tokenizer is being supplied, so it can parse the directory and construct the actual tokenizer.
**Note**: We only have to provide the **directory** where the tokenizer file exists along with its vocabulary and any other essential components. We pass the directory instead of an explicit vocabulary path, since not all libraries construct their tokenizer in the same manner, so the model will figure out how it should prepare the tokenizer.
###Code
params.model.tokenizer.dir = "./tokenizers/an4/tokenizer_spe_unigram_v32/" # note this is a directory, not a path to a vocabulary file
params.model.tokenizer.type = "bpe"
###Output
_____no_output_____
###Markdown
Training with PyTorch Lightning
NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected.
However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule and we recommend you use PytorchLightning for training and fine-tuning as it makes using mixed precision and distributed training very easy. So to start, let's create a Trainer instance for training on a GPU for 50 epochs.
###Code
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
###Output
_____no_output_____
###Markdown
Next, we instantiate an ASR model based on our ``citrinet_bpe.yaml`` file from the previous section.
Note that this is a stage during which we also tell the model where our training and validation manifests are.
###Code
# Update paths to dataset
params.model.train_ds.manifest_filepath = train_manifest
params.model.validation_ds.manifest_filepath = test_manifest
# remove spec augment for this dataset
params.model.spec_augment.rect_masks = 0
###Output
_____no_output_____
###Markdown
Note the subtle difference in the model that we instantiate - `EncDecCTCModelBPE` instead of `EncDecCTCModel`.
`EncDecCTCModelBPE` is nearly identical to `EncDecCTCModel` (it is in fact a subclass!) and simply adds support for subword tokenization.
###Code
first_asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=params.model, trainer=trainer)
###Output
_____no_output_____
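###Markdown
As noted above, `EncDecCTCModelBPE` is a subclass of the character-based `EncDecCTCModel`; a quick way to verify this relationship:
###Code
# EncDecCTCModelBPE extends the character-based EncDecCTCModel
print(issubclass(nemo_asr.models.EncDecCTCModelBPE, nemo_asr.models.EncDecCTCModel))
###Output
_____no_output_____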
###Markdown
Training: Monitoring Progress
We can now start Tensorboard to see how training went. Recall that WER stands for Word Error Rate and so the lower it is, the better.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
With that, we can start training with just one line!
###Code
# Start training!!!
trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Save the model easily along with the tokenizer using `save_to`.
Later, when we use `restore_from` to restore the model, it will also reinitialize the tokenizer.
###Code
first_asr_model.save_to("first_model.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____
###Markdown
There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to()`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from()`. We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:
###Code
print(params.model.optim)
###Output
_____no_output_____
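###Markdown
Going back to the checkpointing note above, restoring the saved `.nemo` file later is a single call; the variable name below is just the hypothetical one used in that note. Restoring also brings back the config and the tokenizer.
###Code
# Rebuild the model (weights, config and tokenizer) from the saved checkpoint
first_asr_model_continued = nemo_asr.models.EncDecCTCModelBPE.restore_from("first_model.nemo")
print(first_asr_model_continued.decoder.vocabulary)
###Output
_____no_output_____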
###Markdown
After training and hyper parameter tuning
Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `.setup_optimization()` with the new optimization parameters.
###Code
import copy
new_opt = copy.deepcopy(params.model.optim)
new_opt.lr = 0.1
first_asr_model.setup_optimization(optim_config=new_opt);
# And then you can invoke trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Inference
Let's have a quick look at how one could run inference with NeMo's ASR model.
First, ``EncDecCTCModelBPE`` and its subclasses contain a handy ``transcribe`` method which can be used to simply obtain audio files' transcriptions. It also has a batch_size argument to improve performance.
###Code
print(first_asr_model.transcribe(paths2audio_files=['./an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
'./an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
'./an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
'./an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
batch_size=4))
###Output
_____no_output_____
###Markdown
Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute Word Error Rate (WER) metric between predictions and references.
###Code
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16
# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()
first_asr_model.eval()
# We remove some preprocessing artifacts which benefit training
first_asr_model.preprocessor.featurizer.pad_to = 0
first_asr_model.preprocessor.featurizer.dither = 0.0
# We will be computing the Word Error Rate (WER) metric between our predictions (hypotheses) and the reference transcripts.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
test_batch = [x.cuda() for x in test_batch]
targets = test_batch[2]
targets_lengths = test_batch[3]
log_probs, encoded_len, greedy_predictions = first_asr_model(
input_signal=test_batch[0], input_signal_length=test_batch[1]
)
# Notice the model has a helper object to compute WER
first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
_, wer_num, wer_denom = first_asr_model._wer.compute()
wer_nums.append(wer_num.detach().cpu().numpy())
wer_denoms.append(wer_denom.detach().cpu().numpy())
# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
###Output
_____no_output_____
###Markdown
This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Utilizing the underlying tokenizer
Since the model has an underlying tokenizer, it would be nice to use it externally as well - say for getting the subwords of the transcript or to tokenize a dataset using the same tokenizer as the ASR model.
###Code
tokenizer = first_asr_model.tokenizer
tokenizer
###Output
_____no_output_____
###Markdown
You can get the tokenizer's vocabulary using the `tokenizer.tokenizer.get_vocab()` method.
ASR tokenizers will map the subword to an integer index in the vocabulary for convenience.
###Code
vocab = tokenizer.tokenizer.get_vocab()
vocab
###Output
_____no_output_____
###Markdown
You can also tokenize and detokenize some text using this tokenizer, with the same API across all of NeMo.
###Code
tokens = tokenizer.text_to_tokens("hello world")
tokens
token_ids = tokenizer.text_to_ids("hello world")
token_ids
subwords = tokenizer.ids_to_tokens(token_ids)
subwords
text = tokenizer.ids_to_text(token_ids)
text
###Output
_____no_output_____
###Markdown
Model Improvements
You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.
Data Augmentation
There exist several ASR data augmentation methods that can increase the size of our training set.
For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)
Our toy model disables spectrogram augmentation, because it is not significantly beneficial for the short demo.
###Code
print(OmegaConf.to_yaml(first_asr_model._cfg['spec_augment']))
###Output
_____no_output_____
###Markdown
If you want to enable SpecAugment in your model, make sure your .yaml config file contains a 'model/spec_augment' section that looks like the one above. Transfer learning
Transfer learning is an important machine learning technique that uses a model’s knowledge of one task to perform better on another. Fine-tuning is one of the techniques to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.
Transfer learning with NeMo is simple. Let's demonstrate how we could fine-tune the model we trained earlier on AN4 data. (NOTE: this is a toy example). And, while we are at it, we will change the model's vocabulary to demonstrate how it's done. -----
First, let's create another tokenizer - perhaps using a larger vocabulary size than the small tokenizer we created earlier. We also swap out the `sentencepiece` tokenizer for a `BERT Word Piece` tokenizer.
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="./an4/train_manifest.json" \
--data_root="./tokenizers/an4/" \
--vocab_size=64 \
--tokenizer="wpe" \
--no_lower_case \
--log
###Output
_____no_output_____
###Markdown
Now let's load the previously trained model so that we can fine-tune it:
###Code
restored_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("./first_model.nemo")
###Output
_____no_output_____
###Markdown
Now let's update the vocabulary in this model
###Code
# Check what kind of vocabulary/alphabet the model has right now
print(restored_model.decoder.vocabulary)
# Let's change the tokenizer vocabulary by passing the path to the new directory,
# and also change the type
restored_model.change_vocabulary(
new_tokenizer_dir="./tokenizers/an4/tokenizer_wpe_v64/",
new_tokenizer_type="wpe"
)
###Output
_____no_output_____
###Markdown
After this, our decoder has completely changed, but our encoder (where most of the weights are) remains intact. Let's fine-tune this model for 20 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After training and hyper parameter tuning" section).
**Note**: For this demonstration, we will also freeze the encoder to speed up finetuning (since both tokenizers are built on the same train set), but in general it should not be done for proper training on a new language (or on a different corpus than the original train corpus).
###Code
# Use the smaller learning rate we set before
restored_model.setup_optimization(optim_config=new_opt)
# Point to the data we'll use for fine-tuning as the training set
restored_model.setup_training_data(train_data_config=params['model']['train_ds'])
# Point to the new validation data for fine-tuning
restored_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
# Freeze the encoder layers (should not be done for finetuning, only done for demo)
restored_model.encoder.freeze()
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=20)
trainer.fit(restored_model)
###Output
_____no_output_____
###Markdown
-----
That's it! Our tokenizer is now built and stored inside the `data_root` directory that we provided to the script.
First we start by inspecting the tokenizer vocabulary itself. To keep it manageable, we will print just the first 10 tokens of the vocabulary:
###Code
!head -n 10 {data_dir}/tokenizers/an4/tokenizer_spe_unigram_v32/vocab.txt
###Output
_____no_output_____
###Markdown
Training an ASR Model with subword tokenization
Now that our tokenizer is built, let's begin constructing an ASR model that will use this tokenizer for its dataset pre-processing and post-processing steps.
We will use a Citrinet model to demonstrate the usage of subword tokenization models for training and inference. Citrinet is a [QuartzNet-like architecture](https://arxiv.org/abs/1910.10261), but it uses subword-tokenization along with 8x subsampling and [Squeeze-and-Excitation](https://arxiv.org/abs/1709.01507) to achieve strong accuracy in transcriptions while still using non-autoregressive decoding for efficient inference.
We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/v1.0.0b1/).
NeMo lets us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Training from scratch
To train from scratch, you need to prepare your training data in the right format and specify your model's architecture. Specifying Our Model with a YAML Config File
We'll build a *Citrinet* model for this tutorial and use *greedy CTC decoder*, using the configuration found in `./configs/citrinet_bpe.yaml`.
If we open up this config file, we find a model section which describes the architecture of our model. It contains an entry labeled `encoder`, with a field called `jasper` that contains a list with multiple entries. Each of the members in this list specifies one block in our model, and looks something like this:
```
- filters: 192
repeat: 5
kernel: [11]
stride: [1]
dilation: [1]
dropout: 0.0
residual: false
separable: true
se: true
se_context_size: -1
```
The first member of the list corresponds to the first block in the QuartzNet/Citrinet architecture diagram.
Some entries at the top of the file specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.
Using a YAML config such as this helps get a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.
###Code
from omegaconf import OmegaConf, open_dict
params = OmegaConf.load("./configs/config_bpe.yaml")
###Output
_____no_output_____
###Markdown
Let us make the network smaller since `AN4` is a particularly small dataset and does not need the capacity of the general config.
###Code
print(OmegaConf.to_yaml(params))
###Output
_____no_output_____
###Markdown
Specifying the tokenizer to the model
Now that we have a model config, we are almost ready to train it! We just have to tell it where the tokenizer directory is, and it will do the rest for us!
We have to provide just two pieces of information via the config:
- `tokenizer.dir`: The directory where the tokenizer files are stored
- `tokenizer.type`: Can be `bpe` (for `sentencepiece`-based tokenizers) or `wpe` (for HuggingFace-based BERT Word Piece tokenizers). This tells the model what type of tokenizer is being supplied, so it can parse the directory and construct the actual tokenizer.
**Note**: We only have to provide the **directory** where the tokenizer file exists along with its vocabulary and any other essential components. We pass the directory instead of an explicit vocabulary path, since not all libraries construct their tokenizer in the same manner, so the model will figure out how it should prepare the tokenizer.
###Code
params.model.tokenizer.dir = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/" # note this is a directory, not a path to a vocabulary file
params.model.tokenizer.type = "bpe"
###Output
_____no_output_____
###Markdown
Training with PyTorch Lightning
NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected.
However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule and we recommend you use PytorchLightning for training and fine-tuning as it makes using mixed precision and distributed training very easy. So to start, let's create a Trainer instance for training on a GPU for 50 epochs.
###Code
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
###Output
_____no_output_____
###Markdown
Next, we instantiate an ASR model based on our ``citrinet_bpe.yaml`` file from the previous section.
Note that this is a stage during which we also tell the model where our training and validation manifests are.
###Code
# Update paths to dataset
params.model.train_ds.manifest_filepath = train_manifest
params.model.validation_ds.manifest_filepath = test_manifest
# remove spec augment for this dataset
params.model.spec_augment.rect_masks = 0
###Output
_____no_output_____
###Markdown
Note the subtle difference in the model that we instantiate - `EncDecCTCModelBPE` instead of `EncDecCTCModel`.
`EncDecCTCModelBPE` is nearly identical to `EncDecCTCModel` (it is in fact a subclass!) and simply adds support for subword tokenization.
###Code
first_asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=params.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Training: Monitoring Progress
We can now start Tensorboard to see how training went. Recall that WER stands for Word Error Rate and so the lower it is, the better.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
With that, we can start training with just one line!
###Code
# Start training!!!
trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Save the model easily along with the tokenizer using `save_to`.
Later, when we use `restore_from` to restore the model, it will also reinitialize the tokenizer.
###Code
first_asr_model.save_to("first_model.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____
###Markdown
There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to()`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from()`. We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:
###Code
print(params.model.optim)
###Output
_____no_output_____
###Markdown
After training and hyper parameter tuning
Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `.setup_optimization()` with the new optimization parameters.
###Code
import copy
new_opt = copy.deepcopy(params.model.optim)
new_opt.lr = 0.1
first_asr_model.setup_optimization(optim_config=new_opt);
# And then you can invoke trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Inference
Let's have a quick look at how one could run inference with NeMo's ASR model.
First, ``EncDecCTCModelBPE`` and its subclasses contain a handy ``transcribe`` method which can be used to simply obtain audio files' transcriptions. It also has a batch_size argument to improve performance.
###Code
print(first_asr_model.transcribe(paths2audio_files=[data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
batch_size=4))
###Output
_____no_output_____
###Markdown
Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute Word Error Rate (WER) metric between predictions and references.
###Code
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16
# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()
first_asr_model.eval()
# We remove some preprocessing artifacts which benefit training
first_asr_model.preprocessor.featurizer.pad_to = 0
first_asr_model.preprocessor.featurizer.dither = 0.0
# We will be computing the Word Error Rate (WER) metric between our predictions (hypotheses) and the reference transcripts.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
test_batch = [x.cuda() for x in test_batch]
targets = test_batch[2]
targets_lengths = test_batch[3]
log_probs, encoded_len, greedy_predictions = first_asr_model(
input_signal=test_batch[0], input_signal_length=test_batch[1]
)
# Notice the model has a helper object to compute WER
first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
_, wer_num, wer_denom = first_asr_model._wer.compute()
wer_nums.append(wer_num.detach().cpu().numpy())
wer_denoms.append(wer_denom.detach().cpu().numpy())
# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
###Output
_____no_output_____
###Markdown
This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Utilizing the underlying tokenizer
Since the model has an underlying tokenizer, it would be nice to use it externally as well - say for getting the subwords of the transcript or to tokenize a dataset using the same tokenizer as the ASR model.
###Code
tokenizer = first_asr_model.tokenizer
tokenizer
###Output
_____no_output_____
###Markdown
You can get the tokenizer's vocabulary using the `tokenizer.tokenizer.get_vocab()` method.
ASR tokenizers will map the subword to an integer index in the vocabulary for convenience.
###Code
vocab = tokenizer.tokenizer.get_vocab()
vocab
###Output
_____no_output_____
###Markdown
You can also tokenize and detokenize some text using this tokenizer, with the same API across all of NeMo.
###Code
tokens = tokenizer.text_to_tokens("hello world")
tokens
token_ids = tokenizer.text_to_ids("hello world")
token_ids
subwords = tokenizer.ids_to_tokens(token_ids)
subwords
text = tokenizer.ids_to_text(token_ids)
text
###Output
_____no_output_____
###Markdown
Model Improvements
You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.
Data Augmentation
There exist several ASR data augmentation methods that can increase the size of our training set.
For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)
Our toy model disables spectrogram augmentation, because it is not significantly beneficial for the short demo.
###Code
print(OmegaConf.to_yaml(first_asr_model._cfg['spec_augment']))
###Output
_____no_output_____
###Markdown
If you want to enable SpecAugment in your model, make sure your .yaml config file contains a 'model/spec_augment' section that looks like the one above. Transfer learning
Transfer learning is an important machine learning technique that uses a model’s knowledge of one task to perform better on another. Fine-tuning is one of the techniques to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.
Transfer learning with NeMo is simple. Let's demonstrate how we could fine-tune the model we trained earlier on AN4 data. (NOTE: this is a toy example). And, while we are at it, we will change the model's vocabulary to demonstrate how it's done. -----
First, let's create another tokenizer - perhaps using a larger vocabulary size than the small tokenizer we created earlier. We also swap out the `sentencepiece` tokenizer for a `BERT Word Piece` tokenizer.
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=64 \
--tokenizer="wpe" \
--no_lower_case \
--log
###Output
_____no_output_____
###Markdown
Now let's load the previously trained model so that we can fine-tune it:
###Code
restored_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("./first_model.nemo")
###Output
_____no_output_____
###Markdown
Now let's update the vocabulary in this model
###Code
# Check what kind of vocabulary/alphabet the model has right now
print(restored_model.decoder.vocabulary)
# Let's change the tokenizer vocabulary by passing the path to the new directory,
# and also change the type
restored_model.change_vocabulary(
new_tokenizer_dir=data_dir + "/tokenizers/an4/tokenizer_wpe_v64/",
new_tokenizer_type="wpe"
)
###Output
_____no_output_____
###Markdown
After this, our decoder has completely changed, but our encoder (where most of the weights are) remains intact. Let's fine-tune this model for 20 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After training and hyper parameter tuning" section).
**Note**: For this demonstration, we will also freeze the encoder to speed up finetuning (since both tokenizers are built on the same train set), but in general it should not be done for proper training on a new language (or on a different corpus than the original train corpus).
###Code
# Use the smaller learning rate we set before
restored_model.setup_optimization(optim_config=new_opt)
# Point to the data we'll use for fine-tuning as the training set
restored_model.setup_training_data(train_data_config=params['model']['train_ds'])
# Point to the new validation data for fine-tuning
restored_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
# Freeze the encoder layers (should not be done for finetuning, only done for demo)
restored_model.encoder.freeze()
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=20)
trainer.fit(restored_model)
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Subword Tokenization
In the [ASR with NeMo notebook](https://colab.research.google.com/github/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/01_ASR_with_NeMo.ipynb), we discuss the pipeline necessary for Automatic Speech Recognition (ASR), and then use the NeMo toolkit to construct a functioning speech recognition model.
In this notebook, we take a step further and look into subword tokenization as a useful encoding scheme for ASR models, and why they are necessary. We then construct a custom tokenizer from the dataset, and use it to construct and train an ASR model on the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). Subword Tokenization
We begin with a short intro to what exactly is subword tokenization. If you are familiar with some Natural Language Processing terminologies, then you might have heard of the term "subword" frequently.
So what is a subword in the first place? Simply put, it is either a single character or a group of characters. When combined according to a tokenization-detokenization algorithm, it generates a set of characters, words, or entire sentences.
Many subword tokenization-detokenization algorithms exist, which can be built using large corpora of text data to tokenize and detokenize the data to and from subwords effectively. Some of the most commonly used subword tokenization methods are [Byte Pair Encoding](https://arxiv.org/abs/1508.07909), [Word Piece Encoding](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and [Sentence Piece Encoding](https://www.aclweb.org/anthology/D18-2012/), to name just a few.
------
Here, we will show a short demo on why subword tokenization is necessary for Automatic Speech Recognition under certain situations and its benefits to the model in terms of efficiency and accuracy. We will implement the general steps that a subword tokenization algorithm might perform. Note - this is just a simplified demonstration of the underlying technique.
###Code
TEXT_CORPUS = [
"hello world",
"today is a good day",
]
###Output
_____no_output_____
###Markdown
We first start with a simple character tokenizer
###Code
def char_tokenize(text):
tokens = []
for char in text:
tokens.append(ord(char))
return tokens
def char_detokenize(tokens):
tokens = [chr(t) for t in tokens]
text = "".join(tokens)
return text
###Output
_____no_output_____
###Markdown
Now let's make sure that the character tokenizer is doing its job correctly!
###Code
char_tokens = char_tokenize(TEXT_CORPUS[0])
print("Tokenized tokens :", char_tokens)
text = char_detokenize(char_tokens)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
-----
Great! The character tokenizer did its job correctly - each character is separated as an individual token, and they can be reconstructed into precisely the original text!
Now let's create a simple dictionary-based tokenizer - it will have a select set of subwords that it will use to map tokens back and forth. Note - to simplify the technique's demonstration, we will use a vocabulary with entire words. However, note that this is an uncommon occurrence unless the vocabulary sizes are huge when built on natural text.
###Code
def dict_tokenize(text, vocabulary):
tokens = []
# first do full word searches
split_text = text.split()
for split in split_text:
if split in vocabulary:
tokens.append(vocabulary[split])
else:
chars = list(split)
t_chars = [vocabulary[c] for c in chars]
tokens.extend(t_chars)
tokens.append(vocabulary[" "])
# remove extra space token
tokens.pop(-1)
return tokens
def dict_detokenize(tokens, vocabulary):
text = ""
reverse_vocab = {v: k for k, v in vocabulary.items()}
for token in tokens:
if token in reverse_vocab:
text = text + reverse_vocab[token]
else:
text = text + "".join(token)
return text
###Output
_____no_output_____
###Markdown
First, we need to build a vocabulary for this tokenizer. It will contain all the lower case English characters, space, and a few whole words for simplicity.
###Code
vocabulary = {chr(i + ord("a")) : (i + 1) for i in range(26)}
# add whole words and special tokens
vocabulary[" "] = 0
vocabulary["hello"] = len(vocabulary) + 1
vocabulary["today"] = len(vocabulary) + 1
vocabulary["good"] = len(vocabulary) + 1
print(vocabulary)
dict_tokens = dict_tokenize(TEXT_CORPUS[0], vocabulary)
print("Tokenized tokens :", dict_tokens)
text = dict_detokenize(dict_tokens, vocabulary)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
------
Great! Our dictionary tokenizer works well and tokenizes-detokenizes the data correctly.
You might be wondering - why did we have to go through all this trouble to tokenize and detokenize data if we get back the same thing?
For ASR - the hidden benefit lies in the length of the tokenized representation!
###Code
print("Character tokenization length -", len(char_tokens))
print("Dict tokenization length -", len(dict_tokens))
###Output
_____no_output_____
###Markdown
By having the whole word "hello" in our tokenizer's dictionary, we could reduce the length of the tokenized data by four tokens and still represent the same information!
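To make the idea of character groups concrete, here is a minimal greedy longest-match sketch with a hand-picked (purely illustrative) subword vocabulary - real algorithms learn such pieces from data rather than having them chosen by hand:
```
# A hand-picked (purely illustrative) subword vocabulary; real algorithms learn these pieces from data.
subword_vocab = {" ": 0, "hel": 1, "lo": 2, "wor": 3, "ld": 4}
# Fall back to single characters so that any lowercase input can always be tokenized.
for i in range(26):
    subword_vocab.setdefault(chr(i + ord("a")), len(subword_vocab))

def greedy_subword_tokenize(text, vocab):
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest matching piece first, then progressively shorter ones.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

print(greedy_subword_tokenize("hello world", subword_vocab))
# ['hel', 'lo', ' ', 'wor', 'ld'] -> 5 tokens instead of 11 characters
```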
Actual subword algorithms like the ones discussed above go several steps further - they partition whole words based on their occurrence in text and build tokens for them too! So instead of spending 5 tokens on `["h", "e", "l", "l", "o"]`, we can represent the word as `["hel", "lo"]` and then merge those two subword tokens back into `hello`, using just 2 tokens! The necessity of subword tokenization
It has been found via extensive research in the domain of Neural Machine Translation and Language Modelling (and its variants), that subword tokenization not only reduces the length of the tokenized representation (thereby making sentences shorter and more manageable for models to learn), but also boosts the accuracy of prediction of correct tokens (refer to the earlier cited papers).
You might remember that, earlier, we mentioned subword tokenization as a necessity rather than just a nice-to-have component for ASR. In the previous tutorial, we used the [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) loss function to train the model, but this loss function has a few limitations:
- **Generated tokens are conditionally independent of each other**. In other words - the probability of character "l" being predicted after "hel" is conditionally independent of the previous token - so any other token can also be predicted unless the model has future information!
- **The length of the generated (target) sequence must be shorter than that of the source sequence.**
------
It turns out - subword tokenization helps alleviate both of these issues!
- Sophisticated subword tokenization algorithms build their vocabularies based on large text corpora. To accurately tokenize such large volumes of text with minimal vocabulary size, the subwords that are learned inherently model the interdependency between tokens of that language to some degree.
Looking at the previous example, the token `hel` is a single token that represents the relationship `h` => `e` => `l`. When the model predicts the single token `hel`, it implicitly predicts this relationship - even though the subsequent token can be either `l` (for `hell`) or `lo` (for `hello`) and is predicted independently of the previous token!
- By reducing the target sentence length by subword tokenization (target sentence here being the characters/subwords transcribed from the audio signal), we entirely sidestep the sequence length limitation of CTC loss!
This means we can perform a larger number of pooling steps in our acoustic models, thereby improving execution speed while simultaneously reducing memory requirements. Building a custom subword tokenizer
After all that talk about subword tokenization, let's finally build a custom tokenizer for our ASR model! While the `AN4` dataset is simple enough to be trained using character-based models, its small size is also perfect for a demonstration on a notebook. Preparing the dataset (AN4)
The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.
Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)
###Code
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# The directory should exist.
data_dir = "."
import glob
import os
import subprocess
import tarfile
import wget
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
###Output
_____no_output_____
###Markdown
You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.
Creating Data Manifests
The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.
Here's an example of what one line in a NeMo-compatible manifest might look like:
```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```
We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:
```
...
P I T T S B U R G H (cen5-fash-b)
TWO SIX EIGHT FOUR FOUR ONE EIGHT (cen7-fash-b)
...
```
###Code
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
###Output
_____no_output_____
###Markdown
Let's look at a few files from this manifest -
###Code
!head -n 5 {data_dir}/an4/train_manifest.json
###Output
_____no_output_____
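###Markdown
Because each manifest line is a standalone JSON object, the manifest is also easy to inspect programmatically. As a small optional sketch, we can count the utterances and total audio duration of the training set:
```
import json

total_duration = 0.0
num_utterances = 0
with open(train_manifest, 'r') as f:
    for line in f:
        sample = json.loads(line)
        total_duration += sample["duration"]
        num_utterances += 1

print(f"{num_utterances} utterances, {total_duration / 3600:.2f} hours of audio")
```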
###Markdown
Build a custom tokenizer
Next, we will use a NeMo script to easily build a tokenizer for the above dataset. The script takes a few arguments, which will be explained in detail.
First, download the tokenizer creation script from the nemo repository.
###Code
if not os.path.exists("scripts/tokenizers/process_asr_text_tokenizer.py"):
!mkdir scripts
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
The script above takes a few important arguments -
- either `--manifest` or `--data_file`: If your text data lies inside of an ASR manifest file, then use the `--manifest` path. If instead the text data is inside a file with separate lines corresponding to different text lines, then use `--data_file`. In either case, you can add commas to concatenate different manifests or different data files.
- `--data_root`: The output directory (whose subdirectories will be created if not present) where the tokenizers will be placed.
- `--vocab_size`: The size of the tokenizer vocabulary. Larger vocabularies can accommodate almost entire words, but the decoder size of any model will grow proportionally.
- `--tokenizer`: Can be either `spe` or `wpe` . `spe` refers to the Google `sentencepiece` library tokenizer. `wpe` refers to the HuggingFace BERT Word Piece tokenizer. Please refer to the papers above for the relevant technique in order to select an appropriate tokenizer.
- `--no_lower_case`: When this flag is passed, it will force the tokenizer to create separate tokens for upper and lower case characters. By default, the script will turn all the text to lower case before tokenization (and if upper case characters are passed during training/inference, the tokenizer will emit a token equivalent to Out-Of-Vocabulary). Used primarily for the English language.
- `--spe_type`: The `sentencepiece` library has a few implementations of the tokenization technique, and `spe_type` refers to these implementations. Currently supported types are `unigram`, `bpe`, `char`, `word`. Defaults to `bpe`.
- `--spe_character_coverage`: The `sentencepiece` library considers how much of the original vocabulary it should cover in its "base set" of tokens (akin to the lower and upper case characters of the English language). For almost all languages with small base token sets `(<1000 tokens)`, this should be kept at its default of 1.0. For languages with larger vocabularies (say Japanese, Mandarin, Korean etc), the suggested value is 0.9995.
- `--spe_sample_size`: If the dataset is too large, consider using a sampled dataset indicated by a positive integer. By default, any negative value (default = -1) will use the entire dataset.
- `--spe_train_extremely_large_corpus`: When training a sentencepiece tokenizer on very large amounts of text, sometimes the tokenizer will run out of memory or won't be able to process so much data in RAM. At some point you might receive the following error - "Input corpus too large, try with train_extremely_large_corpus=true". If your machine has large amounts of RAM, it might still be possible to build the tokenizer using the above flag. It will silently fail if it runs out of RAM.
- `--log`: Whether the script should display log messages
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=32 \
--tokenizer="spe" \
--no_lower_case \
--spe_type="unigram" \
--log
###Output
_____no_output_____
###Markdown
-----
That's it! Our tokenizer is now built and stored inside the `data_root` directory that we provided to the script.
First we start by inspecting the tokenizer vocabulary itself. To keep it manageable, we will print just the first 10 tokens of the vocabulary:
###Code
!head -n 10 {data_dir}/tokenizers/an4/tokenizer_spe_unigram_v32/vocab.txt
###Output
_____no_output_____
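###Markdown
Optionally, the trained tokenizer can be loaded directly with the `sentencepiece` library to see how it splits a sample transcript. The sketch below assumes the script saved its model file as `tokenizer.model` inside the tokenizer directory - check the directory contents and adjust the filename if your version of the script names it differently:
```
import sentencepiece as spm

# Assumed filename - verify what the tokenizer directory actually contains.
sp = spm.SentencePieceProcessor()
sp.load(data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/tokenizer.model")

print(sp.encode_as_pieces("EIGHT FOUR FOUR"))  # subword pieces
print(sp.encode_as_ids("EIGHT FOUR FOUR"))     # their integer ids
```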
###Markdown
Training an ASR Model with subword tokenization
Now that our tokenizer is built, let's begin constructing an ASR model that will use this tokenizer for its dataset pre-processing and post-processing steps.
We will use a Citrinet model to demonstrate the usage of subword tokenization models for training and inference. Citrinet is a [QuartzNet-like architecture](https://arxiv.org/abs/1910.10261), but it uses subword-tokenization along with 8x subsampling and [Squeeze-and-Excitation](https://arxiv.org/abs/1709.01507) to achieve strong accuracy in transcriptions while still using non-autoregressive decoding for efficient inference.
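To get a feel for why the 8x subsampling interacts with CTC (whose target sequence cannot be longer than the encoder output, as noted earlier), here is a rough back-of-the-envelope calculation, assuming a typical 10 ms hop between spectrogram frames:
```
5 s of audio  @ ~100 frames/s (10 ms hop, assumed)  -> ~500 spectrogram frames
500 frames    / 8  (subsampling)                    -> ~62  encoder timesteps
CTC constraint: number of target tokens <= ~62
An 80-character transcript would not fit, but the same transcript as subwords easily would.
```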
We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/r1.0.0rc1/).
NeMo lets us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Training from scratch
To train from scratch, you need to prepare your training data in the right format and specify your model's architecture. Specifying Our Model with a YAML Config File
We'll build a *Citrinet* model for this tutorial and use a *greedy CTC decoder*, using the configuration found in `./configs/citrinet_bpe.yaml`.
If we open up this config file, we find a model section which describes the architecture of our model. A model contains an entry labeled `encoder`, with a field called `jasper` that contains a list with multiple entries. Each of the members in this list specifies one block in our model, and looks something like this:
```
- filters: 192
repeat: 5
kernel: [11]
stride: [1]
dilation: [1]
dropout: 0.0
residual: false
separable: true
se: true
se_context_size: -1
```
The first member of the list corresponds to the first block in the QuartzNet/Citrinet architecture diagram.
Some entries at the top of the file specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.
Using a YAML config such as this helps get a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.
###Code
from omegaconf import OmegaConf, open_dict
params = OmegaConf.load("./configs/config_bpe.yaml")
###Output
_____no_output_____
###Markdown
Let's first print the full config. Since `AN4` is a particularly small dataset, it does not need the capacity of the general config, so we may also want to make the network smaller (a sketch of how to do that follows the next cell).
###Code
print(OmegaConf.to_yaml(params))
###Output
_____no_output_____
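###Markdown
If you do want to shrink the network, the loaded config can be edited in place before the model is instantiated. Below is a hedged sketch - the `encoder.jasper` structure and the `filters`/`repeat` fields mirror the YAML block shown earlier, but verify the exact keys against the config you just printed:
```
# Sketch only - confirm these keys exist in your printed config before running.
with open_dict(params):
    for block in params.model.encoder.jasper:
        if "filters" in block:
            block.filters = min(block.filters, 256)  # cap the width of each block
        if "repeat" in block:
            block.repeat = min(block.repeat, 3)      # fewer repeated sub-blocks per block
```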
###Markdown
Specifying the tokenizer to the model
Now that we have a model config, we are almost ready to train it! We just have to tell it where the tokenizer directory is, and it will do the rest for us!
We have to provide just two pieces of information via the config:
- `tokenizer.dir`: The directory where the tokenizer files are stored
- `tokenizer.type`: Can be `bpe` (for `sentencepiece`-based tokenizers) or `wpe` (for HuggingFace-based BERT Word Piece tokenizers). This tells the model what type of tokenizer is being supplied so that it can parse the directory and construct the actual tokenizer.
**Note**: We only have to provide the **directory** where the tokenizer file exists along with its vocabulary and any other essential components. We pass the directory instead of an explicit vocabulary path, since not all libraries construct their tokenizer in the same manner, so the model will figure out how it should prepare the tokenizer.
###Code
params.model.tokenizer.dir = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/" # note this is a directory, not a path to a vocabulary file
params.model.tokenizer.type = "bpe"
###Output
_____no_output_____
###Markdown
Training with PyTorch Lightning
NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected.
However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule and we recommend you use PytorchLightning for training and fine-tuning as it makes using mixed precision and distributed training very easy. So to start, let's create a Trainer instance for training on a GPU for 50 epochs.
###Code
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
###Output
_____no_output_____
###Markdown
Next, we instantiate an ASR model based on our ``citrinet_bpe.yaml`` file from the previous section.
Note that this is a stage during which we also tell the model where our training and validation manifests are.
###Code
# Update paths to dataset
params.model.train_ds.manifest_filepath = train_manifest
params.model.validation_ds.manifest_filepath = test_manifest
# remove spec augment for this dataset
params.model.spec_augment.rect_masks = 0
###Output
_____no_output_____
###Markdown
Note the subtle difference in the model that we instantiate - `EncDecCTCModelBPE` instead of `EncDecCTCModel`.
`EncDecCTCModelBPE` is nearly identical to `EncDecCTCModel` (it is in fact a subclass!); it simply adds support for subword tokenization.
###Code
first_asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=params.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Training: Monitoring Progress
We can now start TensorBoard to monitor training progress. Recall that WER stands for Word Error Rate, so the lower it is, the better.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
With that, we can start training with just one line!
###Code
# Start training!!!
trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Save the model, along with its tokenizer, using `save_to`.
Later, we can use `restore_from` to restore the model; this will also reinitialize the tokenizer!
###Code
first_asr_model.save_to("first_model.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____
###Markdown
There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to()`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from()`. We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:
###Code
print(params.model.optim)
###Output
_____no_output_____
###Markdown
After training and hyperparameter tuning
Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `.setup_optimization()` with the new optimization parameters.
###Code
import copy
new_opt = copy.deepcopy(params.model.optim)
new_opt.lr = 0.1
first_asr_model.setup_optimization(optim_config=new_opt);
# And then you can invoke trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Inference
Let's have a quick look at how one could run inference with NeMo's ASR model.
First, ``EncDecCTCModelBPE`` and its subclasses contain a handy ``transcribe`` method which can be used to simply obtain audio files' transcriptions. It also has a `batch_size` argument to improve throughput.
###Code
print(first_asr_model.transcribe(paths2audio_files=[data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
batch_size=4))
###Output
_____no_output_____
###Markdown
Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute Word Error Rate (WER) metric between predictions and references.
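As a quick refresher before the code: WER counts the word-level substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the number of words in the reference. For example, if the reference is `four two seven` and the hypothesis is `for two`, that is one substitution and one deletion over three reference words, so WER = 2/3, roughly 0.67.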
###Code
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16
# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()
first_asr_model.eval()
# We remove some preprocessing artifacts which benefit training
first_asr_model.preprocessor.featurizer.pad_to = 0
first_asr_model.preprocessor.featurizer.dither = 0.0
# We will be computing Word Error Rate (WER) metric between our hypothesis and predictions.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
test_batch = [x.cuda() for x in test_batch]
targets = test_batch[2]
targets_lengths = test_batch[3]
log_probs, encoded_len, greedy_predictions = first_asr_model(
input_signal=test_batch[0], input_signal_length=test_batch[1]
)
# Notice the model has a helper object to compute WER
first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
_, wer_num, wer_denom = first_asr_model._wer.compute()
wer_nums.append(wer_num.detach().cpu().numpy())
wer_denoms.append(wer_denom.detach().cpu().numpy())
# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
###Output
_____no_output_____
###Markdown
This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Utilizing the underlying tokenizer
Since the model has an underlying tokenizer, it would be nice to use it externally as well - say for getting the subwords of the transcript or to tokenize a dataset using the same tokenizer as the ASR model.
###Code
tokenizer = first_asr_model.tokenizer
tokenizer
###Output
_____no_output_____
###Markdown
You can get the tokenizer's vocabulary using the `tokenizer.tokenizer.get_vocab()` method.
ASR tokenizers will map the subword to an integer index in the vocabulary for convenience.
###Code
vocab = tokenizer.tokenizer.get_vocab()
vocab
###Output
_____no_output_____
###Markdown
You can also tokenize and detokenize some text using this tokenizer, with the same API across all of NeMo.
###Code
tokens = tokenizer.text_to_tokens("hello world")
tokens
token_ids = tokenizer.text_to_ids("hello world")
token_ids
subwords = tokenizer.ids_to_tokens(token_ids)
subwords
text = tokenizer.ids_to_text(token_ids)
text
###Output
_____no_output_____
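###Markdown
The same API can be pointed at our own data. For example, here is a small sketch that tokenizes the first few training transcripts and compares character length to subword length (reusing the manifest built earlier):
```
import json

with open(train_manifest, 'r') as f:
    for line in list(f)[:5]:
        text = json.loads(line)["text"]
        ids = tokenizer.text_to_ids(text)
        print(f"{len(text):3d} chars -> {len(ids):3d} subword tokens | {text}")
```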
###Markdown
Model Improvements
You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.
Data Augmentation
There exist several ASR data augmentation methods that can increase the size of our training set.
For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)
Our toy model disables spectrogram augmentation, because it is not significantly beneficial for the short demo.
###Code
print(OmegaConf.to_yaml(first_asr_model._cfg['spec_augment']))
###Output
_____no_output_____
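###Markdown
For reference, enabling SpecAugment is just a matter of setting the relevant fields in the config before the model is instantiated. The sketch below uses illustrative values; the field names are the usual `SpectrogramAugmentation` arguments, but confirm them against the section printed above:
```
# Illustrative values only - set these before building the model, and confirm the field names first.
with open_dict(params.model.spec_augment):
    params.model.spec_augment.freq_masks = 2   # number of frequency bands to mask
    params.model.spec_augment.time_masks = 2   # number of time segments to mask
    params.model.spec_augment.rect_masks = 5   # Cutout-style rectangular masks (0 disables them)
```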
###Markdown
If you want to enable SpecAugment in your model, make sure your .yaml config file contains 'model/spec_augment' section which looks like the one above. Transfer learning
Transfer learning is an important machine learning technique that uses a model’s knowledge of one task to perform better on another. Fine-tuning is one of the techniques to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.
Transfer learning with NeMo is simple. Let's demonstrate how we could fine-tune the model we trained earlier on AN4 data. (NOTE: this is a toy example). And, while we are at it, we will change the model's vocabulary to demonstrate how it's done. -----
First, let's create another tokenizer - perhaps using a larger vocabulary size than the small tokenizer we created earlier. Also we swap out `sentencepiece` for `BERT Word Piece` tokenizer.
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=64 \
--tokenizer="wpe" \
--no_lower_case \
--log
###Output
_____no_output_____
###Markdown
Now let's load the previously trained model so that we can fine tune it-
###Code
restored_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("./first_model.nemo")
###Output
_____no_output_____
###Markdown
Now let's update the vocabulary in this model
###Code
# Check what kind of vocabulary/alphabet the model has right now
print(restored_model.decoder.vocabulary)
# Lets change the tokenizer vocabulary by passing the path to the new directory,
# and also change the type
restored_model.change_vocabulary(
new_tokenizer_dir=data_dir + "/tokenizers/an4/tokenizer_wpe_v64/",
new_tokenizer_type="wpe"
)
###Output
_____no_output_____
###Markdown
After this, our decoder has completely changed, but our encoder (where most of the weights are) remained intact. Let's fine-tune this model for 20 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After training and hyperparameter tuning" section).
**Note**: For this demonstration, we will also freeze the encoder to speed up finetuning (since both tokenizers are built on the same train set), but in general it should not be done for proper training on a new language (or on a different corpus than the original train corpus).
###Code
# Use the smaller learning rate we set before
restored_model.setup_optimization(optim_config=new_opt)
# Point to the data we'll use for fine-tuning as the training set
restored_model.setup_training_data(train_data_config=params['model']['train_ds'])
# Point to the new validation data for fine-tuning
restored_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
# Freeze the encoder layers (should not be done for finetuning, only done for demo)
restored_model.encoder.freeze()
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=20)
trainer.fit(restored_model)
###Output
_____no_output_____
###Markdown
Automatic Speech Recognition with Subword Tokenization
In the [ASR with NeMo notebook](https://colab.research.google.com/github/NVIDIA/NeMo/blob/{BRANCH}/tutorials/asr/01_ASR_with_NeMo.ipynb), we discuss the pipeline necessary for Automatic Speech Recognition (ASR), and then use the NeMo toolkit to construct a functioning speech recognition model.
In this notebook, we take a step further and look into subword tokenization as a useful encoding scheme for ASR models, and why they are necessary. We then construct a custom tokenizer from the dataset, and use it to construct and train an ASR model on the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). Subword Tokenization
We begin with a short intro to what exactly is subword tokenization. If you are familiar with some Natural Language Processing terminologies, then you might have heard of the term "subword" frequently.
So what is a subword in the first place? Simply put, it is either a single character or a group of characters. When combined according to a tokenization-detokenization algorithm, it generates a set of characters, words, or entire sentences.
Many subword tokenization-detokenization algorithms exist, which can be built using large corpora of text data to tokenize and detokenize the data to and from subwords effectively. Some of the most commonly used subword tokenization methods are [Byte Pair Encoding](https://arxiv.org/abs/1508.07909), [Word Piece Encoding](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and [Sentence Piece Encoding](https://www.aclweb.org/anthology/D18-2012/), to name just a few.
------
Here, we will show a short demo on why subword tokenization is necessary for Automatic Speech Recognition under certain situations and its benefits to the model in terms of efficiency and accuracy. We will implement the general steps that a subword tokenization algorithm might perform. Note - this is just a simplified demonstration of the underlying technique.
###Code
TEXT_CORPUS = [
"hello world",
"today is a good day",
]
###Output
_____no_output_____
###Markdown
We first start with a simple character tokenizer
###Code
def char_tokenize(text):
tokens = []
for char in text:
tokens.append(ord(char))
return tokens
def char_detokenize(tokens):
tokens = [chr(t) for t in tokens]
text = "".join(tokens)
return text
###Output
_____no_output_____
###Markdown
Now let's make sure that the character tokenizer is doing its job correctly!
###Code
char_tokens = char_tokenize(TEXT_CORPUS[0])
print("Tokenized tokens :", char_tokens)
text = char_detokenize(char_tokens)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
-----
Great! The character tokenizer did its job correctly - each character is separated as an individual token, and they can be reconstructed into precisely the original text!
Now let's create a simple dictionary-based tokenizer - it will have a select set of subwords that it will use to map tokens back and forth. Note - to simplify the demonstration of the technique, we will use a vocabulary with entire words. However, note that this is uncommon in practice unless the vocabulary size is huge when built on natural text.
###Code
def dict_tokenize(text, vocabulary):
tokens = []
# first do full word searches
split_text = text.split()
for split in split_text:
if split in vocabulary:
tokens.append(vocabulary[split])
else:
chars = list(split)
t_chars = [vocabulary[c] for c in chars]
tokens.extend(t_chars)
tokens.append(vocabulary[" "])
# remove extra space token
tokens.pop(-1)
return tokens
def dict_detokenize(tokens, vocabulary):
text = ""
reverse_vocab = {v: k for k, v in vocabulary.items()}
for token in tokens:
if token in reverse_vocab:
text = text + reverse_vocab[token]
else:
text = text + "".join(token)
return text
###Output
_____no_output_____
###Markdown
First, we need to build a vocabulary for this tokenizer. It will contain all the lower case English characters, space, and a few whole words for simplicity.
###Code
vocabulary = {chr(i + ord("a")) : (i + 1) for i in range(26)}
# add whole words and special tokens
vocabulary[" "] = 0
vocabulary["hello"] = len(vocabulary) + 1
vocabulary["today"] = len(vocabulary) + 1
vocabulary["good"] = len(vocabulary) + 1
print(vocabulary)
dict_tokens = dict_tokenize(TEXT_CORPUS[0], vocabulary)
print("Tokenized tokens :", dict_tokens)
text = dict_detokenize(dict_tokens, vocabulary)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
------
Great! Our dictionary tokenizer works well and tokenizes-detokenizes the data correctly.
You might be wondering - why did we have to go through all this trouble to tokenize and detokenize data if we get back the same thing?
For ASR - the hidden benefit lies in the length of the tokenized representation!
###Code
print("Character tokenization length -", len(char_tokens))
print("Dict tokenization length -", len(dict_tokens))
###Output
_____no_output_____
###Markdown
By having the whole word "hello" in our tokenizer's dictionary, we could reduce the length of the tokenized data by four tokens and still represent the same information!
Actual subword algorithms like the ones discussed above go several steps further - they partition whole words based on their occurrence in text and build tokens for them too! So instead of spending 5 tokens on `["h", "e", "l", "l", "o"]`, we can represent the word as `["hel", "lo"]` and then merge those two subword tokens back into `hello`, using just 2 tokens! The necessity of subword tokenization
It has been found via extensive research in the domain of Neural Machine Translation and Language Modelling (and its variants), that subword tokenization not only reduces the length of the tokenized representation (thereby making sentences shorter and more manageable for models to learn), but also boosts the accuracy of prediction of correct tokens (refer to the earlier cited papers).
You might remember that, earlier, we mentioned subword tokenization as a necessity rather than just a nice-to-have component for ASR. In the previous tutorial, we used the [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) loss function to train the model, but this loss function has a few limitations:
- **Generated tokens are conditionally independent of each other**. In other words - the probability of character "l" being predicted after "hel" is conditionally independent of the previous token - so any other token can also be predicted unless the model has future information!
- **The length of the generated (target) sequence must be shorter than that of the source sequence.**
------
It turns out - subword tokenization helps alleviate both of these issues!
- Sophisticated subword tokenization algorithms build their vocabularies based on large text corpora. To accurately tokenize such large volumes of text with minimal vocabulary size, the subwords that are learned inherently model the interdependency between tokens of that language to some degree.
Looking at the previous example, the token `hel` is a single token that represents the relationship `h` => `e` => `l`. When the model predicts the single token `hel`, it implicitly predicts this relationship - even though the subsequent token can be either `l` (for `hell`) or `lo` (for `hello`) and is predicted independently of the previous token!
- By reducing the target sentence length by subword tokenization (target sentence here being the characters/subwords transcribed from the audio signal), we entirely sidestep the sequence length limitation of CTC loss!
This means we can perform a larger number of pooling steps in our acoustic models, thereby improving execution speed while simultaneously reducing memory requirements. Building a custom subword tokenizer
After all that talk about subword tokenization, let's finally build a custom tokenizer for our ASR model! While the `AN4` dataset is simple enough to be trained using character-based models, its small size is also perfect for a demonstration on a notebook. Preparing the dataset (AN4)
The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.
Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)
###Code
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# The directory should exist.
data_dir = "."
import glob
import os
import subprocess
import tarfile
import wget
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
###Output
_____no_output_____
###Markdown
You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.
Creating Data Manifests
The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.
Here's an example of what one line in a NeMo-compatible manifest might look like:
```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```
We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:
```
...
P I T T S B U R G H (cen5-fash-b)
TWO SIX EIGHT FOUR FOUR ONE EIGHT (cen7-fash-b)
...
```
###Code
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
###Output
_____no_output_____
###Markdown
Let's look at a few files from this manifest -
###Code
!head -n 5 {data_dir}/an4/train_manifest.json
###Output
_____no_output_____
###Markdown
Build a custom tokenizer
Next, we will use a NeMo script to easily build a tokenizer for the above dataset. The script takes a few arguments, which will be explained in detail.
First, download the tokenizer creation script from the nemo repository.
###Code
if not os.path.exists("scripts/process_asr_text_tokenizer.py"):
!mkdir scripts
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
The script above takes a few important arguments -
- either `--manifest` or `--data_file`: If your text data lies inside of an ASR manifest file, then use the `--manifest` path. If instead the text data is inside a file with separate lines corresponding to different text lines, then use `--data_file`. In either case, you can add commas to concatenate different manifests or different data files.
- `--data_root`: The output directory (whose subdirectories will be created if not present) where the tokenizers will be placed.
- `--vocab_size`: The size of the tokenizer vocabulary. Larger vocabularies can accommodate almost entire words, but the decoder size of any model will grow proportionally.
- `--tokenizer`: Can be either `spe` or `wpe` . `spe` refers to the Google `sentencepiece` library tokenizer. `wpe` refers to the HuggingFace BERT Word Piece tokenizer. Please refer to the papers above for the relevant technique in order to select an appropriate tokenizer.
- `--no_lower_case`: When this flag is passed, it will force the tokenizer to create separate tokens for upper and lower case characters. By default, the script will turn all the text to lower case before tokenization (and if upper case characters are passed during training/inference, the tokenizer will emit a token equivalent to Out-Of-Vocabulary). Used primarily for the English language.
- `--spe_type`: The `sentencepiece` library has a few implementations of the tokenization technique, and `spe_type` refers to these implementations. Currently supported types are `unigram`, `bpe`, `char`, `word`. Defaults to `bpe`.
- `--spe_character_coverage`: The `sentencepiece` library considers how much of the original vocabulary it should cover in its "base set" of tokens (akin to the lower and upper case characters of the English language). For almost all languages with small base token sets `(<1000 tokens)`, this should be kept at its default of 1.0. For languages with larger vocabularies (say Japanese, Mandarin, Korean etc), the suggested value is 0.9995.
- `--log`: Whether the script should display log messages
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=32 \
--tokenizer="spe" \
--no_lower_case \
--spe_type="unigram" \
--log
###Output
_____no_output_____
###Markdown
-----
That's it! Our tokenizer is now built and stored inside the `data_root` directory that we provided to the script.
First we start by inspecting the tokenizer vocabulary itself. To keep it manageable, we will print just the first 10 tokens of the vocabulary:
###Code
!head -n 10 {data_dir}/tokenizers/an4/tokenizer_spe_unigram_v32/vocab.txt
###Output
_____no_output_____
###Markdown
Training an ASR Model with subword tokenization
Now that our tokenizer is built, let's begin constructing an ASR model that will use this tokenizer for its dataset pre-processing and post-processing steps.
We will use a Citrinet model to demonstrate the usage of subword tokenization models for training and inference. Citrinet is a [QuartzNet-like architecture](https://arxiv.org/abs/1910.10261), but it uses subword-tokenization along with 8x subsampling and [Squeeze-and-Excitation](https://arxiv.org/abs/1709.01507) to achieve strong accuracy in transcriptions while still using non-autoregressive decoding for efficient inference.
We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/index.html).
NeMo lets us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Training from scratch
To train from scratch, you need to prepare your training data in the right format and specify your model's architecture. Specifying Our Model with a YAML Config File
We'll build a *Citrinet* model for this tutorial and use a *greedy CTC decoder*, using the configuration found in `./configs/citrinet_bpe.yaml`.
If we open up this config file, we find a model section which describes the architecture of our model. A model contains an entry labeled `encoder`, with a field called `jasper` that contains a list with multiple entries. Each of the members in this list specifies one block in our model, and looks something like this:
```
- filters: 192
repeat: 5
kernel: [11]
stride: [1]
dilation: [1]
dropout: 0.0
residual: false
separable: true
se: true
se_context_size: -1
```
The first member of the list corresponds to the first block in the QuartzNet/Citrinet architecture diagram.
Some entries at the top of the file specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.
Using a YAML config such as this helps get a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.
###Code
from omegaconf import OmegaConf, open_dict
params = OmegaConf.load("./configs/config_bpe.yaml")
###Output
_____no_output_____
###Markdown
Let us make the network smaller since `AN4` is a particularly small dataset and does not need the capacity of the general config.
###Code
print(OmegaConf.to_yaml(params))
###Output
_____no_output_____
###Markdown
Specifying the tokenizer to the model
Now that we have a model config, we are almost ready to train it ! We just have to inform it where the tokenizer directory exists and it will do the rest for us !
We have to provide just two pieces of information via the config:
- `tokenizer.dir`: The directory where the tokenizer files are stored
- `tokenizer.type`: Can be `bpe` (for `sentencepiece`-based tokenizers) or `wpe` (for HuggingFace-based BERT Word Piece tokenizers). This tells the model what type of tokenizer is being supplied so that it can parse the directory and construct the actual tokenizer.
**Note**: We only have to provide the **directory** where the tokenizer file exists along with its vocabulary and any other essential components. We pass the directory instead of an explicit vocabulary path, since not all libraries construct their tokenizer in the same manner, so the model will figure out how it should prepare the tokenizer.
###Code
params.model.tokenizer.dir = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/" # note this is a directory, not a path to a vocabulary file
params.model.tokenizer.type = "bpe"
###Output
_____no_output_____
###Markdown
Training with PyTorch Lightning
NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected.
However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule and we recommend you use PytorchLightning for training and fine-tuning as it makes using mixed precision and distributed training very easy. So to start, let's create a Trainer instance for training on a GPU for 50 epochs.
###Code
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
###Output
_____no_output_____
###Markdown
Next, we instantiate an ASR model based on our ``citrinet_bpe.yaml`` file from the previous section.
Note that this is a stage during which we also tell the model where our training and validation manifests are.
###Code
# Update paths to dataset
params.model.train_ds.manifest_filepath = train_manifest
params.model.validation_ds.manifest_filepath = test_manifest
# remove spec augment for this dataset
params.model.spec_augment.rect_masks = 0
###Output
_____no_output_____
###Markdown
Note the subtle difference in the model that we instantiate - `EncDecCTCModelBPE` instead of `EncDecCTCModel`.
`EncDecCTCModelBPE` is nearly identical to `EncDecCTCModel` (it is in fact a subclass!); it simply adds support for subword tokenization.
###Code
first_asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=params.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Training: Monitoring Progress
We can now start TensorBoard to monitor training progress. Recall that WER stands for Word Error Rate, so the lower it is, the better.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
With that, we can start training with just one line!
###Code
# Start training!!!
trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Save the model, along with its tokenizer, using `save_to`.
Later, we can use `restore_from` to restore the model; this will also reinitialize the tokenizer!
###Code
first_asr_model.save_to("first_model.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____
###Markdown
There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to()`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from()`. We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:
###Code
print(params.model.optim)
###Output
_____no_output_____
###Markdown
After training and hyperparameter tuning
Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `.setup_optimization()` with the new optimization parameters.
###Code
import copy
new_opt = copy.deepcopy(params.model.optim)
new_opt.lr = 0.1
first_asr_model.setup_optimization(optim_config=new_opt);
# And then you can invoke trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Inference
Let's have a quick look at how one could run inference with NeMo's ASR model.
First, ``EncDecCTCModelBPE`` and its subclasses contain a handy ``transcribe`` method which can be used to simply obtain audio files' transcriptions. It also has a `batch_size` argument to improve throughput.
###Code
print(first_asr_model.transcribe(paths2audio_files=[data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
batch_size=4))
###Output
_____no_output_____
###Markdown
Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute Word Error Rate (WER) metric between predictions and references.
###Code
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16
# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()
first_asr_model.eval()
# We remove some preprocessing artifacts which benefit training
first_asr_model.preprocessor.featurizer.pad_to = 0
first_asr_model.preprocessor.featurizer.dither = 0.0
# We will be computing Word Error Rate (WER) metric between our hypothesis and predictions.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
test_batch = [x.cuda() for x in test_batch]
targets = test_batch[2]
targets_lengths = test_batch[3]
log_probs, encoded_len, greedy_predictions = first_asr_model(
input_signal=test_batch[0], input_signal_length=test_batch[1]
)
# Notice the model has a helper object to compute WER
first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
_, wer_num, wer_denom = first_asr_model._wer.compute()
wer_nums.append(wer_num.detach().cpu().numpy())
wer_denoms.append(wer_denom.detach().cpu().numpy())
# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
###Output
_____no_output_____
###Markdown
This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Utilizing the underlying tokenizer
Since the model has an underlying tokenizer, it would be nice to use it externally as well - say for getting the subwords of the transcript or to tokenize a dataset using the same tokenizer as the ASR model.
###Code
tokenizer = first_asr_model.tokenizer
tokenizer
###Output
_____no_output_____
###Markdown
You can get the tokenizer's vocabulary using the `tokenizer.tokenizer.get_vocab()` method.
ASR tokenizers will map the subword to an integer index in the vocabulary for convenience.
###Code
vocab = tokenizer.tokenizer.get_vocab()
vocab
###Output
_____no_output_____
###Markdown
You can also tokenize and detokenize some text using this tokenizer, with the same API across all of NeMo.
###Code
tokens = tokenizer.text_to_tokens("hello world")
tokens
token_ids = tokenizer.text_to_ids("hello world")
token_ids
subwords = tokenizer.ids_to_tokens(token_ids)
subwords
text = tokenizer.ids_to_text(token_ids)
text
###Output
_____no_output_____
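###Markdown
As a small hedged sketch (assuming the `train_manifest` path from the manifest-building step), the same tokenizer can be applied to a whole dataset - for example, to inspect how many subword tokens each training transcript needs:
###Code
# Sketch only: tokenize every transcript in the training manifest with the model's tokenizer.
import json

token_lengths = []
with open(train_manifest, 'r') as f:
    for line in f:
        sample = json.loads(line)
        token_lengths.append(len(tokenizer.text_to_ids(sample["text"])))

print("Average subword tokens per utterance:", sum(token_lengths) / len(token_lengths))
###Output
_____no_output_____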
###Markdown
Model Improvements
You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.
Data Augmentation
There exist several ASR data augmentation methods that can increase the size of our training set.
For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)
Our toy model disables spectrogram augmentation, because it is not significantly beneficial for the short demo.
###Code
print(OmegaConf.to_yaml(first_asr_model._cfg['spec_augment']))
###Output
_____no_output_____
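###Markdown
As a hedged sketch (the `freq_masks`/`time_masks` field names are assumptions based on NeMo's `SpectrogramAugmentation` module rather than values shown in this tutorial), here is one way you could enable masking by editing a copy of the config, leaving the toy model above untouched:
###Code
# Sketch only: enable SpecAugment-style masking on a copy of the spec_augment config.
# Field names are assumptions; check them against your NeMo version before relying on them.
import copy
from omegaconf import OmegaConf, open_dict

aug_cfg = copy.deepcopy(first_asr_model._cfg['spec_augment'])
with open_dict(aug_cfg):
    aug_cfg.freq_masks = 2   # number of frequency masks (SpecAugment)
    aug_cfg.time_masks = 2   # number of time masks (SpecAugment)
    aug_cfg.rect_masks = 5   # number of rectangular masks (Cutout)
print(OmegaConf.to_yaml(aug_cfg))
###Output
_____no_output_____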
###Markdown
If you want to enable SpecAugment in your model, make sure your .yaml config file contains a 'model/spec_augment' section that looks like the one above. Transfer learning
Transfer learning is an important machine learning technique that uses a model’s knowledge of one task to perform better on another. Fine-tuning is one of the techniques to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.
Transfer learning with NeMo is simple. Let's demonstrate how we could fine-tune the model we trained earlier on AN4 data. (NOTE: this is a toy example). And, while we are at it, we will change the model's vocabulary to demonstrate how it's done. -----
First, let's create another tokenizer - perhaps using a larger vocabulary size than the small tokenizer we created earlier. Also, we swap out the `sentencepiece` tokenizer for a `BERT Word Piece` tokenizer.
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=64 \
--tokenizer="wpe" \
--no_lower_case \
--log
###Output
_____no_output_____
###Markdown
Now let's load the previously trained model so that we can fine-tune it.
###Code
restored_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("./first_model.nemo")
###Output
_____no_output_____
###Markdown
Now let's update the vocabulary in this model
###Code
# Check what kind of vocabulary/alphabet the model has right now
print(restored_model.decoder.vocabulary)
# Let's change the tokenizer vocabulary by passing the path to the new directory,
# and also change the type
restored_model.change_vocabulary(
new_tokenizer_dir=data_dir + "/tokenizers/an4/tokenizer_wpe_v64/",
new_tokenizer_type="wpe"
)
###Output
_____no_output_____
###Markdown
After this, our decoder has completely changed, but our encoder (where most of the weights are) remained intact. Let's fine-tune this model for 20 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After training" section).
**Note**: For this demonstration, we will also freeze the encoder to speed up fine-tuning (since both tokenizers are built on the same training set), but in general this should not be done when properly training on a new language (or on a corpus different from the original training corpus).
###Code
# Use the smaller learning rate we set before
restored_model.setup_optimization(optim_config=new_opt)
# Point to the data we'll use for fine-tuning as the training set
restored_model.setup_training_data(train_data_config=params['model']['train_ds'])
# Point to the new validation data for fine-tuning
restored_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
# Freeze the encoder layers (should not be done for finetuning, only done for demo)
restored_model.encoder.freeze()
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=20)
trainer.fit(restored_model)
###Output
_____no_output_____
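###Markdown
As a hedged follow-up sketch, once fine-tuning finishes you may want to unfreeze the encoder we froze for the demo and save the resulting model with the same `save_to` pattern used earlier (the output filename below is just an example):
###Code
# Sketch only: unfreeze the demo-frozen encoder and save the fine-tuned checkpoint.
restored_model.encoder.unfreeze()
restored_model.save_to("first_model_finetuned.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____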
###Markdown
Automatic Speech Recognition with Subword Tokenization
In the [ASR with NeMo notebook](https://colab.research.google.com/github/NVIDIA/NeMo/blob/stable/tutorials/asr/01_ASR_with_NeMo.ipynb), we discuss the pipeline necessary for Automatic Speech Recognition (ASR), and then use the NeMo toolkit to construct a functioning speech recognition model.
In this notebook, we take a step further and look into subword tokenization as a useful encoding scheme for ASR models, and why they are necessary. We then construct a custom tokenizer from the dataset, and use it to construct and train an ASR model on the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). Subword Tokenization
We begin with a short intro to what exactly subword tokenization is. If you are familiar with Natural Language Processing terminology, then you might have heard the term "subword" frequently.
So what is a subword in the first place? Simply put, it is either a single character or a group of characters. When subwords are combined according to a tokenization-detokenization algorithm, they reconstruct characters, words, or entire sentences.
Many subword tokenization-detokenization algorithms exist, which can be built using large corpora of text data to tokenize and detokenize the data to and from subwords effectively. Some of the most commonly used subword tokenization methods are [Byte Pair Encoding](https://arxiv.org/abs/1508.07909), [Word Piece Encoding](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and [Sentence Piece Encoding](https://www.aclweb.org/anthology/D18-2012/), to name just a few.
------
Here, we will show a short demo on why subword tokenization is necessary for Automatic Speech Recognition under certain situations and its benefits to the model in terms of efficiency and accuracy. We will implement the general steps that a subword tokenization algorithm might perform. Note - this is just a simplified demonstration of the underlying technique.
###Code
TEXT_CORPUS = [
"hello world",
"today is a good day",
]
###Output
_____no_output_____
###Markdown
We first start with a simple character tokenizer
###Code
def char_tokenize(text):
tokens = []
for char in text:
tokens.append(ord(char))
return tokens
def char_detokenize(tokens):
tokens = [chr(t) for t in tokens]
text = "".join(tokens)
return text
###Output
_____no_output_____
###Markdown
Now let's make sure that the character tokenizer is doing its job correctly!
###Code
char_tokens = char_tokenize(TEXT_CORPUS[0])
print("Tokenized tokens :", char_tokens)
text = char_detokenize(char_tokens)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
-----
Great! The character tokenizer did its job correctly - each character is separated as an individual token, and they can be reconstructed into precisely the original text!
Now let's create a simple dictionary-based tokenizer - it will have a select set of subwords that it will use to map tokens back and forth. Note - to simplify the demonstration of the technique, we will use a vocabulary containing entire words. However, in practice whole words only show up in a vocabulary built on natural text when the vocabulary size is huge.
###Code
def dict_tokenize(text, vocabulary):
tokens = []
# first do full word searches
split_text = text.split()
for split in split_text:
if split in vocabulary:
tokens.append(vocabulary[split])
else:
chars = list(split)
t_chars = [vocabulary[c] for c in chars]
tokens.extend(t_chars)
tokens.append(vocabulary[" "])
# remove extra space token
tokens.pop(-1)
return tokens
def dict_detokenize(tokens, vocabulary):
text = ""
reverse_vocab = {v: k for k, v in vocabulary.items()}
for token in tokens:
if token in reverse_vocab:
text = text + reverse_vocab[token]
else:
text = text + "".join(token)
return text
###Output
_____no_output_____
###Markdown
First, we need to build a vocabulary for this tokenizer. It will contain all the lower case English characters, space, and a few whole words for simplicity.
###Code
vocabulary = {chr(i + ord("a")) : (i + 1) for i in range(26)}
# add whole words and special tokens
vocabulary[" "] = 0
vocabulary["hello"] = len(vocabulary) + 1
vocabulary["today"] = len(vocabulary) + 1
vocabulary["good"] = len(vocabulary) + 1
print(vocabulary)
dict_tokens = dict_tokenize(TEXT_CORPUS[0], vocabulary)
print("Tokenized tokens :", dict_tokens)
text = dict_detokenize(dict_tokens, vocabulary)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
------
Great! Our dictionary tokenizer works well and tokenizes-detokenizes the data correctly.
You might be wondering - why did we have to go through all this trouble to tokenize and detokenize data if we get back the same thing?
For ASR - the hidden benefit lies in the length of the tokenized representation!
###Code
print("Character tokenization length -", len(char_tokens))
print("Dict tokenization length -", len(dict_tokens))
###Output
_____no_output_____
###Markdown
By having the whole word "hello" in our tokenizer's dictionary, we could reduce the length of the tokenized data by four tokens and still represent the same information!
Actual subword algorithms like the ones discussed above go several steps further - they partition whole words based on their occurrence in text and build tokens for frequent fragments too! So instead of spending 5 tokens on `["h", "e", "l", "l", "o"]`, we can represent it as `["hel", "lo"]` and then merge those two tokens together to get back `hello`, using just 2 tokens! The necessity of subword tokenization
It has been found via extensive research in the domain of Neural Machine Translation and Language Modelling (and its variants), that subword tokenization not only reduces the length of the tokenized representation (thereby making sentences shorter and more manageable for models to learn), but also boosts the accuracy of prediction of correct tokens (refer to the earlier cited papers).
You might remember that, earlier, we mentioned subword tokenization as a necessity rather than just a nice-to-have component for ASR. In the previous tutorial, we used the [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) loss function to train the model, but this loss function has a few limitations-
- **Generated tokens are conditionally independent of each other**. In other words - the probability of character "l" being predicted after "hel" is conditionally independent of the previous token - so any other token can also be predicted unless the model has future information!
- **The length of the generated (target) sequence must be shorter than that of the source sequence.**
------
It turns out - subword tokenization helps alleviate both of these issues!
- Sophisticated subword tokenization algorithms build their vocabularies based on large text corpora. To accurately tokenize such large volumes of text with minimal vocabulary size, the subwords that are learned inherently model the interdependency between tokens of that language to some degree.
Looking at the previous example, the token `hel` is a single token that represents the relationship `h` => `e` => `l`. When the model predicts the single token `hel`, it implicitly predicts this relationship - even though the subsequent token can be either `l` (for `hell`) or `lo` (for `hello`) and is predicted independently of the previous token!
- By reducing the target sentence length by subword tokenization (target sentence here being the characters/subwords transcribed from the audio signal), we entirely sidestep the sequence length limitation of CTC loss!
This means we can perform a larger number of pooling steps in our acoustic models, thereby improving execution speed while simultaneously reducing memory requirements. Building a custom subword tokenizer
After all that talk about subword tokenization, let's finally build a custom tokenizer for our ASR model! While the `AN4` dataset is simple enough to be trained using character-based models, its small size is also perfect for a demonstration on a notebook. Preparing the dataset (AN4)
The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.
Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)
###Code
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# The directory should exist.
data_dir = "."
import glob
import os
import subprocess
import tarfile
import wget
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
###Output
_____no_output_____
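###Markdown
As a quick hedged check (not part of the original tutorial), we can count the converted files to confirm the `sox` conversion completed:
###Code
# Sketch only: count the .wav files produced by the conversion step above.
wav_files = glob.glob(data_dir + '/an4/**/*.wav', recursive=True)
print(f"Found {len(wav_files)} wav files")
###Output
_____no_output_____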
###Markdown
You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.
Creating Data Manifests
The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.
Here's an example of what one line in a NeMo-compatible manifest might look like:
```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```
We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:
```
...
P I T T S B U R G H (cen5-fash-b)
TWO SIX EIGHT FOUR FOUR ONE EIGHT (cen7-fash-b)
...
```
###Code
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
###Output
_____no_output_____
###Markdown
Let's look at a few files from this manifest -
###Code
!head -n 5 {data_dir}/an4/train_manifest.json
###Output
_____no_output_____
###Markdown
Build a custom tokenizer
Next, we will use a NeMo script to easily build a tokenizer for the above dataset. The script takes a few arguments, which will be explained in detail.
First, download the tokenizer creation script from the NeMo repository.
###Code
if not os.path.exists("scripts/tokenizers/process_asr_text_tokenizer.py"):
!mkdir scripts
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
The script above takes a few important arguments -
- either `--manifest` or `--data_file`: If your text data lies inside of an ASR manifest file, then use the `--manifest` path. If instead the text data is inside a file with separate lines corresponding to different text lines, then use `--data_file`. In either case, you can add commas to concatenate different manifests or different data files.
- `--data_root`: The output directory (whose subdirectories will be created if not present) where the tokenizers will be placed.
- `--vocab_size`: The size of the tokenizer vocabulary. Larger vocabularies can accommodate almost entire words, but the decoder size of any model will grow proportionally.
- `--tokenizer`: Can be either `spe` or `wpe`. `spe` refers to the Google `sentencepiece` library tokenizer. `wpe` refers to the HuggingFace BERT Word Piece tokenizer. Please refer to the papers above for the relevant technique in order to select an appropriate tokenizer.
- `--no_lower_case`: When this flag is passed, it will force the tokenizer to create separate tokens for upper and lower case characters. By default, the script will turn all the text to lower case before tokenization (and if upper case characters are passed during training/inference, the tokenizer will emit a token equivalent to Out-Of-Vocabulary). Used primarily for the English language.
- `--spe_type`: The `sentencepiece` library has a few implementations of the tokenization technique, and `spe_type` refers to these implementations. Currently supported types are `unigram`, `bpe`, `char`, `word`. Defaults to `bpe`.
- `--spe_character_coverage`: The `sentencepiece` library considers how much of the original vocabulary it should cover in its "base set" of tokens (akin to the lower and upper case characters of the English language). For almost all languages with small base token sets `(<1000 tokens)`, this should be kept at its default of 1.0. For languages with larger vocabularies (say Japanese, Mandarin, Korean etc), the suggested value is 0.9995.
- `--spe_sample_size`: If the dataset is too large, consider using a sampled dataset indicated by a positive integer. By default, any negative value (default = -1) will use the entire dataset.
- `--spe_train_extremely_large_corpus`: When training a sentencepiece tokenizer on very large amounts of text, sometimes the tokenizer will run out of memory or won't be able to process so much data in RAM. At some point you might receive the following error - "Input corpus too large, try with train_extremely_large_corpus=true". If your machine has large amounts of RAM, it might still be possible to build the tokenizer using the above flag. It will silently fail if it runs out of RAM.
- `--log`: Whether the script should display log messages
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=32 \
--tokenizer="spe" \
--no_lower_case \
--spe_type="unigram" \
--log
###Output
_____no_output_____
###Markdown
-----
That's it! Our tokenizer is now built and stored inside the `data_root` directory that we provided to the script.
First we start by inspecting the tokenizer vocabulary itself. To keep it manageable, we will print just the first 10 tokens of the vocabulary:
###Code
!head -n 10 {data_dir}/tokenizers/an4/tokenizer_spe_unigram_v32/vocab.txt
###Output
_____no_output_____
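###Markdown
As an optional hedged sketch (the `tokenizer.model` filename is an assumption about how the script lays out its output directory - adjust it if your version differs), we can also load the freshly built tokenizer with the `sentencepiece` library itself and see how it splits a sample transcript:
###Code
# Sketch only: inspect the trained sentencepiece model directly.
# The model filename below is an assumption about the script's output layout.
import sentencepiece as spm

spe_model_path = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/tokenizer.model"
sp = spm.SentencePieceProcessor()
sp.Load(spe_model_path)
print(sp.EncodeAsPieces("eight two four"))
###Output
_____no_output_____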
###Markdown
Training an ASR Model with subword tokenization
Now that our tokenizer is built, let's begin constructing an ASR model that will use this tokenizer for its dataset pre-processing and post-processing steps.
We will use a Citrinet model to demonstrate the usage of subword tokenization models for training and inference. Citrinet is a [QuartzNet-like architecture](https://arxiv.org/abs/1910.10261), but it uses subword-tokenization along with 8x subsampling and [Squeeze-and-Excitation](https://arxiv.org/abs/1709.01507) to achieve strong accuracy in transcriptions while still using non-autoregressive decoding for efficient inference.
We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/v1.0.2/).
NeMo lets us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collection contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Training from scratch
To train from scratch, you need to prepare your training data in the right format and specify your model's architecture. Specifying Our Model with a YAML Config File
We'll build a *Citrinet* model for this tutorial and use a *greedy CTC decoder*, using the configuration found in `./configs/citrinet_bpe.yaml`.
If we open up this config file, we find a model section which describes the architecture of our model. The model contains an entry labeled `encoder`, with a field called `jasper` that contains a list with multiple entries. Each of the members in this list specifies one block in our model, and looks something like this:
```
- filters: 192
repeat: 5
kernel: [11]
stride: [1]
dilation: [1]
dropout: 0.0
residual: false
separable: true
se: true
se_context_size: -1
```
The first member of the list corresponds to the first block in the QuartzNet/Citrinet architecture diagram.
Some entries at the top of the file specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.
Using a YAML config such as this helps get a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.
###Code
from omegaconf import OmegaConf, open_dict
params = OmegaConf.load("./configs/config_bpe.yaml")
###Output
_____no_output_____
###Markdown
Let us make the network smaller since `AN4` is a particularly small dataset and does not need the capacity of the general config.
###Code
print(OmegaConf.to_yaml(params))
###Output
_____no_output_____
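###Markdown
As an optional hedged sketch (the `filters` and `repeat` field names follow the block structure shown earlier; everything else about the block list is an assumption), one way to shrink the encoder is to cap those values on a copy of the config, so the rest of the tutorial is unaffected:
###Code
# Sketch only: cap the width and depth of every encoder block on a copy of the config.
import copy
from omegaconf import open_dict

small_params = copy.deepcopy(params)
with open_dict(small_params):
    for block in small_params.model.encoder.jasper:
        block.filters = min(block.filters, 128)  # cap channel count per block
        block.repeat = min(block.repeat, 1)      # cap sub-block repetitions
###Output
_____no_output_____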
###Markdown
Specifying the tokenizer to the model
Now that we have a model config, we are almost ready to train it! We just have to tell it where the tokenizer directory is, and it will do the rest for us!
We have to provide just two pieces of information via the config:
- `tokenizer.dir`: The directory where the tokenizer files are stored
- `tokenizer.type`: Can be `bpe` (for `sentencepiece`-based tokenizers) or `wpe` (for HuggingFace-based BERT Word Piece tokenizers). This tells the model what type of tokenizer is being supplied so it can parse the directory and construct the actual tokenizer.
**Note**: We only have to provide the **directory** where the tokenizer file exists along with its vocabulary and any other essential components. We pass the directory instead of an explicit vocabulary path, since not all libraries construct their tokenizer in the same manner, so the model will figure out how it should prepare the tokenizer.
###Code
params.model.tokenizer.dir = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/" # note this is a directory, not a path to a vocabulary file
params.model.tokenizer.type = "bpe"
###Output
_____no_output_____
###Markdown
Training with PyTorch Lightning
NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected.
However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule and we recommend you use PytorchLightning for training and fine-tuning as it makes using mixed precision and distributed training very easy. So to start, let's create a Trainer instance for training on a GPU for 50 epochs
###Code
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
###Output
_____no_output_____
###Markdown
Next, we instantiate an ASR model based on our ``citrinet_bpe.yaml`` file from the previous section.
Note that this is a stage during which we also tell the model where our training and validation manifests are.
###Code
# Update paths to dataset
params.model.train_ds.manifest_filepath = train_manifest
params.model.validation_ds.manifest_filepath = test_manifest
# remove spec augment for this dataset
params.model.spec_augment.rect_masks = 0
###Output
_____no_output_____
###Markdown
Note the subtle difference in the model that we instantiate - `EncDecCTCModelBPE` instead of `EncDecCTCModel`.
`EncDecCTCModelBPE` is nearly identical to `EncDecCTCModel` (it is in fact a subclass!) that simply adds support for subword tokenization.
###Code
first_asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=params.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Training: Monitoring Progress
We can now start TensorBoard to monitor training progress. Recall that WER stands for Word Error Rate, so the lower it is, the better.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
With that, we can start training with just one line!
###Code
# Start training!!!
trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Save the model easily along with the tokenizer using `save_to`.
Later, we can use `restore_from` to restore the model; this will also reinitialize the tokenizer!
###Code
first_asr_model.save_to("first_model.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____
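###Markdown
As a quick hedged sanity check, we can restore the checkpoint we just wrote and confirm that the tokenizer comes back along with the weights:
###Code
# Sketch only: reload the saved .nemo file and inspect its tokenizer.
reloaded_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("first_model.nemo")
print(reloaded_model.tokenizer)
###Output
_____no_output_____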
###Markdown
There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to()`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from()`. We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:
###Code
print(params.model.optim)
###Output
_____no_output_____
###Markdown
After training and hyper parameter tuning
Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `.setup_optimization()` with the new optimization parameters.
###Code
import copy
new_opt = copy.deepcopy(params.model.optim)
new_opt.lr = 0.1
first_asr_model.setup_optimization(optim_config=new_opt);
# And then you can invoke trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Inference
Let's have a quick look at how one could run inference with NeMo's ASR model.
First, ``EncDecCTCModelBPE`` and its subclasses contain a handy ``transcribe`` method which can be used to obtain transcriptions for a list of audio files. It also has a ``batch_size`` argument to improve throughput.
###Code
print(first_asr_model.transcribe(paths2audio_files=[data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
batch_size=4))
###Output
_____no_output_____
###Markdown
Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute Word Error Rate (WER) metric between predictions and references.
###Code
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16
# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()
first_asr_model.eval()
# We remove some preprocessing artifacts which benefit training
first_asr_model.preprocessor.featurizer.pad_to = 0
first_asr_model.preprocessor.featurizer.dither = 0.0
# We will be computing Word Error Rate (WER) metric between our hypothesis and predictions.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
test_batch = [x.cuda() for x in test_batch]
targets = test_batch[2]
targets_lengths = test_batch[3]
log_probs, encoded_len, greedy_predictions = first_asr_model(
input_signal=test_batch[0], input_signal_length=test_batch[1]
)
# Notice the model has a helper object to compute WER
first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
_, wer_num, wer_denom = first_asr_model._wer.compute()
wer_nums.append(wer_num.detach().cpu().numpy())
wer_denoms.append(wer_denom.detach().cpu().numpy())
# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
###Output
_____no_output_____
###Markdown
This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Utilizing the underlying tokenizer
Since the model has an underlying tokenizer, it would be nice to use it externally as well - say for getting the subwords of the transcript or to tokenize a dataset using the same tokenizer as the ASR model.
###Code
tokenizer = first_asr_model.tokenizer
tokenizer
###Output
_____no_output_____
###Markdown
You can get the tokenizer's vocabulary using the `tokenizer.tokenizer.get_vocab()` method.
ASR tokenizers will map the subword to an integer index in the vocabulary for convenience.
###Code
vocab = tokenizer.tokenizer.get_vocab()
vocab
###Output
_____no_output_____
###Markdown
You can also tokenize and detokenize some text using this tokenizer, with the same API across all of NeMo.
###Code
tokens = tokenizer.text_to_tokens("hello world")
tokens
token_ids = tokenizer.text_to_ids("hello world")
token_ids
subwords = tokenizer.ids_to_tokens(token_ids)
subwords
text = tokenizer.ids_to_text(token_ids)
text
###Output
_____no_output_____
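###Markdown
As a small hedged sketch (assuming the `train_manifest` path from the manifest-building step), the same tokenizer can be applied to a whole dataset - for example, to inspect how many subword tokens each training transcript needs:
###Code
# Sketch only: tokenize every transcript in the training manifest with the model's tokenizer.
import json

token_lengths = []
with open(train_manifest, 'r') as f:
    for line in f:
        sample = json.loads(line)
        token_lengths.append(len(tokenizer.text_to_ids(sample["text"])))

print("Average subword tokens per utterance:", sum(token_lengths) / len(token_lengths))
###Output
_____no_output_____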
###Markdown
Model Improvements
You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.
Data Augmentation
There exist several ASR data augmentation methods that can increase the size of our training set.
For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)
Our toy model disables spectrogram augmentation, because it is not significantly beneficial for the short demo.
###Code
print(OmegaConf.to_yaml(first_asr_model._cfg['spec_augment']))
###Output
_____no_output_____
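###Markdown
As a hedged sketch (the `freq_masks`/`time_masks` field names are assumptions based on NeMo's `SpectrogramAugmentation` module rather than values shown in this tutorial), here is one way you could enable masking by editing a copy of the config, leaving the toy model above untouched:
###Code
# Sketch only: enable SpecAugment-style masking on a copy of the spec_augment config.
# Field names are assumptions; check them against your NeMo version before relying on them.
import copy
from omegaconf import OmegaConf, open_dict

aug_cfg = copy.deepcopy(first_asr_model._cfg['spec_augment'])
with open_dict(aug_cfg):
    aug_cfg.freq_masks = 2   # number of frequency masks (SpecAugment)
    aug_cfg.time_masks = 2   # number of time masks (SpecAugment)
    aug_cfg.rect_masks = 5   # number of rectangular masks (Cutout)
print(OmegaConf.to_yaml(aug_cfg))
###Output
_____no_output_____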
###Markdown
If you want to enable SpecAugment in your model, make sure your .yaml config file contains a 'model/spec_augment' section that looks like the one above. Transfer learning
Transfer learning is an important machine learning technique that uses a model’s knowledge of one task to perform better on another. Fine-tuning is one of the techniques to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.
Transfer learning with NeMo is simple. Let's demonstrate how we could fine-tune the model we trained earlier on AN4 data. (NOTE: this is a toy example). And, while we are at it, we will change the model's vocabulary to demonstrate how it's done. -----
First, let's create another tokenizer - perhaps using a larger vocabulary size than the small tokenizer we created earlier. Also, we swap out the `sentencepiece` tokenizer for a `BERT Word Piece` tokenizer.
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=64 \
--tokenizer="wpe" \
--no_lower_case \
--log
###Output
_____no_output_____
###Markdown
Now let's load the previously trained model so that we can fine-tune it.
###Code
restored_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("./first_model.nemo")
###Output
_____no_output_____
###Markdown
Now let's update the vocabulary in this model
###Code
# Check what kind of vocabulary/alphabet the model has right now
print(restored_model.decoder.vocabulary)
# Let's change the tokenizer vocabulary by passing the path to the new directory,
# and also change the type
restored_model.change_vocabulary(
new_tokenizer_dir=data_dir + "/tokenizers/an4/tokenizer_wpe_v64/",
new_tokenizer_type="wpe"
)
###Output
_____no_output_____
###Markdown
After this, our decoder has completely changed, but our encoder (where most of the weights are) remained intact. Let's fine-tune this model for 20 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After training" section).
**Note**: For this demonstration, we will also freeze the encoder to speed up fine-tuning (since both tokenizers are built on the same training set), but in general this should not be done when properly training on a new language (or on a corpus different from the original training corpus).
###Code
# Use the smaller learning rate we set before
restored_model.setup_optimization(optim_config=new_opt)
# Point to the data we'll use for fine-tuning as the training set
restored_model.setup_training_data(train_data_config=params['model']['train_ds'])
# Point to the new validation data for fine-tuning
restored_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
# Freeze the encoder layers (should not be done for finetuning, only done for demo)
restored_model.encoder.freeze()
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=20)
trainer.fit(restored_model)
###Output
_____no_output_____
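###Markdown
As a hedged follow-up sketch, once fine-tuning finishes you may want to unfreeze the encoder we froze for the demo and save the resulting model with the same `save_to` pattern used earlier (the output filename below is just an example):
###Code
# Sketch only: unfreeze the demo-frozen encoder and save the fine-tuned checkpoint.
restored_model.encoder.unfreeze()
restored_model.save_to("first_model_finetuned.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____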
###Markdown
Automatic Speech Recognition with Subword Tokenization
In the [ASR with NeMo notebook](https://colab.research.google.com/github/NVIDIA/NeMo/blob/v1.0.0/tutorials/asr/01_ASR_with_NeMo.ipynb), we discuss the pipeline necessary for Automatic Speech Recognition (ASR), and then use the NeMo toolkit to construct a functioning speech recognition model.
In this notebook, we take a step further and look into subword tokenization as a useful encoding scheme for ASR models, and why they are necessary. We then construct a custom tokenizer from the dataset, and use it to construct and train an ASR model on the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). Subword Tokenization
We begin with a short intro to what exactly subword tokenization is. If you are familiar with Natural Language Processing terminology, then you might have heard the term "subword" frequently.
So what is a subword in the first place? Simply put, it is either a single character or a group of characters. When subwords are combined according to a tokenization-detokenization algorithm, they reconstruct characters, words, or entire sentences.
Many subword tokenization-detokenization algorithms exist, which can be built using large corpora of text data to tokenize and detokenize the data to and from subwords effectively. Some of the most commonly used subword tokenization methods are [Byte Pair Encoding](https://arxiv.org/abs/1508.07909), [Word Piece Encoding](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and [Sentence Piece Encoding](https://www.aclweb.org/anthology/D18-2012/), to name just a few.
------
Here, we will show a short demo on why subword tokenization is necessary for Automatic Speech Recognition under certain situations and its benefits to the model in terms of efficiency and accuracy. We will implement the general steps that a subword tokenization algorithm might perform. Note - this is just a simplified demonstration of the underlying technique.
###Code
TEXT_CORPUS = [
"hello world",
"today is a good day",
]
###Output
_____no_output_____
###Markdown
We first start with a simple character tokenizer
###Code
def char_tokenize(text):
tokens = []
for char in text:
tokens.append(ord(char))
return tokens
def char_detokenize(tokens):
tokens = [chr(t) for t in tokens]
text = "".join(tokens)
return text
###Output
_____no_output_____
###Markdown
Now let's make sure that the character tokenizer is doing its job correctly!
###Code
char_tokens = char_tokenize(TEXT_CORPUS[0])
print("Tokenized tokens :", char_tokens)
text = char_detokenize(char_tokens)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
-----
Great! The character tokenizer did its job correctly - each character is separated as an individual token, and they can be reconstructed into precisely the original text!
Now let's create a simple dictionary-based tokenizer - it will have a select set of subwords that it will use to map tokens back and forth. Note - to simplify the demonstration of the technique, we will use a vocabulary containing entire words. However, in practice whole words only show up in a vocabulary built on natural text when the vocabulary size is huge.
###Code
def dict_tokenize(text, vocabulary):
tokens = []
# first do full word searches
split_text = text.split()
for split in split_text:
if split in vocabulary:
tokens.append(vocabulary[split])
else:
chars = list(split)
t_chars = [vocabulary[c] for c in chars]
tokens.extend(t_chars)
tokens.append(vocabulary[" "])
# remove extra space token
tokens.pop(-1)
return tokens
def dict_detokenize(tokens, vocabulary):
text = ""
reverse_vocab = {v: k for k, v in vocabulary.items()}
for token in tokens:
if token in reverse_vocab:
text = text + reverse_vocab[token]
else:
text = text + "".join(token)
return text
###Output
_____no_output_____
###Markdown
First, we need to build a vocabulary for this tokenizer. It will contain all the lower case English characters, space, and a few whole words for simplicity.
###Code
vocabulary = {chr(i + ord("a")) : (i + 1) for i in range(26)}
# add whole words and special tokens
vocabulary[" "] = 0
vocabulary["hello"] = len(vocabulary) + 1
vocabulary["today"] = len(vocabulary) + 1
vocabulary["good"] = len(vocabulary) + 1
print(vocabulary)
dict_tokens = dict_tokenize(TEXT_CORPUS[0], vocabulary)
print("Tokenized tokens :", dict_tokens)
text = dict_detokenize(dict_tokens, vocabulary)
print("Detokenized text :", text)
###Output
_____no_output_____
###Markdown
------
Great! Our dictionary tokenizer works well and tokenizes-detokenizes the data correctly.
You might be wondering - why did we have to go through all this trouble to tokenize and detokenize data if we get back the same thing?
For ASR - the hidden benefit lies in the length of the tokenized representation!
###Code
print("Character tokenization length -", len(char_tokens))
print("Dict tokenization length -", len(dict_tokens))
###Output
_____no_output_____
###Markdown
By having the whole word "hello" in our tokenizer's dictionary, we could reduce the length of the tokenized data by four tokens and still represent the same information!
Actual subword algorithms like the ones discussed above go several steps further - they partition whole words based on their occurrence in text and build tokens for frequent fragments too! So instead of spending 5 tokens on `["h", "e", "l", "l", "o"]`, we can represent it as `["hel", "lo"]` and then merge those two tokens together to get back `hello`, using just 2 tokens! The necessity of subword tokenization
It has been found via extensive research in the domain of Neural Machine Translation and Language Modelling (and its variants), that subword tokenization not only reduces the length of the tokenized representation (thereby making sentences shorter and more manageable for models to learn), but also boosts the accuracy of prediction of correct tokens (refer to the earlier cited papers).
You might remember that, earlier, we mentioned subword tokenization as a necessity rather than just a nice-to-have component for ASR. In the previous tutorial, we used the [Connectionist Temporal Classification](https://www.cs.toronto.edu/~graves/icml_2006.pdf) loss function to train the model, but this loss function has a few limitations-
- **Generated tokens are conditionally independent of each other**. In other words - the probability of character "l" being predicted after "hel" is conditionally independent of the previous token - so any other token can also be predicted unless the model has future information!
- **The length of the generated (target) sequence must be shorter than that of the source sequence.**
------
It turns out - subword tokenization helps alleviate both of these issues!
- Sophisticated subword tokenization algorithms build their vocabularies based on large text corpora. To accurately tokenize such large volumes of text with minimal vocabulary size, the subwords that are learned inherently model the interdependency between tokens of that language to some degree.
Looking at the previous example, the token `hel` is a single token that represents the relationship `h` => `e` => `l`. When the model predicts the single token `hel`, it implicitly predicts this relationship - even though the subsequent token can be either `l` (for `hell`) or `lo` (for `hello`) and is predicted independently of the previous token!
- By reducing the target sentence length by subword tokenization (target sentence here being the characters/subwords transcribed from the audio signal), we entirely sidestep the sequence length limitation of CTC loss!
This means we can perform a larger number of pooling steps in our acoustic models, thereby improving execution speed while simultaneously reducing memory requirements. Building a custom subword tokenizer
After all that talk about subword tokenization, let's finally build a custom tokenizer for our ASR model! While the `AN4` dataset is simple enough to be trained using character-based models, its small size is also perfect for a demonstration on a notebook. Preparing the dataset (AN4)
The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, and their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.
Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)
###Code
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# The directory should exist.
data_dir = "."
import glob
import os
import subprocess
import tarfile
import wget
# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
an4_path = wget.download(an4_url, data_dir)
print(f"Dataset downloaded at: {an4_path}")
else:
print("Tarfile already exists.")
an4_path = data_dir + '/an4_sphere.tar.gz'
if not os.path.exists(data_dir + '/an4/'):
# Untar and convert .sph to .wav (using sox)
tar = tarfile.open(an4_path)
tar.extractall(path=data_dir)
print("Converting .sph to .wav...")
sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
for sph_path in sph_list:
wav_path = sph_path[:-4] + '.wav'
cmd = ["sox", sph_path, wav_path]
subprocess.run(cmd)
print("Finished conversion.\n******")
###Output
_____no_output_____
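###Markdown
As a quick hedged check (not part of the original tutorial), we can count the converted files to confirm the `sox` conversion completed:
###Code
# Sketch only: count the .wav files produced by the conversion step above.
wav_files = glob.glob(data_dir + '/an4/**/*.wav', recursive=True)
print(f"Found {len(wav_files)} wav files")
###Output
_____no_output_____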
###Markdown
You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.
Creating Data Manifests
The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.
Here's an example of what one line in a NeMo-compatible manifest might look like:
```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```
We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:
```
...
P I T T S B U R G H (cen5-fash-b)
TWO SIX EIGHT FOUR FOUR ONE EIGHT (cen7-fash-b)
...
```
###Code
# --- Building Manifest Files --- #
import json
import librosa
# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
with open(transcripts_path, 'r') as fin:
with open(manifest_path, 'w') as fout:
for line in fin:
# Lines look like this:
# <s> transcript </s> (fileID)
transcript = line[: line.find('(')-1].lower()
transcript = transcript.replace('<s>', '').replace('</s>', '')
transcript = transcript.strip()
file_id = line[line.find('(')+1 : -2] # e.g. "cen4-fash-b"
audio_path = os.path.join(
data_dir, wav_path,
file_id[file_id.find('-')+1 : file_id.rfind('-')],
file_id + '.wav')
duration = librosa.core.get_duration(filename=audio_path)
# Write the metadata to the manifest
metadata = {
"audio_filepath": audio_path,
"duration": duration,
"text": transcript
}
json.dump(metadata, fout)
fout.write('\n')
# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
print("Training manifest created.")
test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
print("Test manifest created.")
print("***Done***")
###Output
_____no_output_____
###Markdown
Let's look at a few files from this manifest -
###Code
!head -n 5 {data_dir}/an4/train_manifest.json
###Output
_____no_output_____
###Markdown
Build a custom tokenizer
Next, we will use a NeMo script to easily build a tokenizer for the above dataset. The script takes a few arguments, which will be explained in detail.
First, download the tokenizer creation script from the NeMo repository.
###Code
if not os.path.exists("scripts/tokenizers/process_asr_text_tokenizer.py"):
!mkdir scripts
!wget -P scripts/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/tokenizers/process_asr_text_tokenizer.py
###Output
_____no_output_____
###Markdown
The script above takes a few important arguments -
- either `--manifest` or `--data_file`: If your text data lies inside of an ASR manifest file, then use the `--manifest` path. If instead the text data is inside a file with separate lines corresponding to different text lines, then use `--data_file`. In either case, you can add commas to concatenate different manifests or different data files.
- `--data_root`: The output directory (whose subdirectories will be created if not present) where the tokenizers will be placed.
- `--vocab_size`: The size of the tokenizer vocabulary. Larger vocabularies can accommodate almost entire words, but the decoder size of any model will grow proportionally.
- `--tokenizer`: Can be either `spe` or `wpe`. `spe` refers to the Google `sentencepiece` library tokenizer. `wpe` refers to the HuggingFace BERT Word Piece tokenizer. Please refer to the papers above for the relevant technique in order to select an appropriate tokenizer.
- `--no_lower_case`: When this flag is passed, it will force the tokenizer to create separate tokens for upper and lower case characters. By default, the script will turn all the text to lower case before tokenization (and if upper case characters are passed during training/inference, the tokenizer will emit a token equivalent to Out-Of-Vocabulary). Used primarily for the English language.
- `--spe_type`: The `sentencepiece` library has a few implementations of the tokenization technique, and `spe_type` refers to these implementations. Currently supported types are `unigram`, `bpe`, `char`, `word`. Defaults to `bpe`.
- `--spe_character_coverage`: The `sentencepiece` library considers how much of the original vocabulary it should cover in its "base set" of tokens (akin to the lower and upper case characters of the English language). For almost all languages with small base token sets `(<1000 tokens)`, this should be kept at its default of 1.0. For languages with larger vocabularies (say Japanese, Mandarin, Korean etc), the suggested value is 0.9995.
- `--spe_sample_size`: If the dataset is too large, consider using a sampled dataset indicated by a positive integer. By default, any negative value (default = -1) will use the entire dataset.
- `--spe_train_extremely_large_corpus`: When training a sentencepiece tokenizer on very large amounts of text, sometimes the tokenizer will run out of memory or won't be able to process so much data in RAM. At some point you might receive the following error - "Input corpus too large, try with train_extremely_large_corpus=true". If your machine has large amounts of RAM, it might still be possible to build the tokenizer using the above flag. It will silently fail if it runs out of RAM.
- `--log`: Whether the script should display log messages
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=32 \
--tokenizer="spe" \
--no_lower_case \
--spe_type="unigram" \
--log
###Output
_____no_output_____
###Markdown
-----
That's it! Our tokenizer is now built and stored inside the `data_root` directory that we provided to the script.
First we start by inspecting the tokenizer vocabulary itself. To keep it manageable, we will print just the first 10 tokens of the vocabulary:
###Code
!head -n 10 {data_dir}/tokenizers/an4/tokenizer_spe_unigram_v32/vocab.txt
###Output
_____no_output_____
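###Markdown
As an optional hedged sketch (the `tokenizer.model` filename is an assumption about how the script lays out its output directory - adjust it if your version differs), we can also load the freshly built tokenizer with the `sentencepiece` library itself and see how it splits a sample transcript:
###Code
# Sketch only: inspect the trained sentencepiece model directly.
# The model filename below is an assumption about the script's output layout.
import sentencepiece as spm

spe_model_path = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/tokenizer.model"
sp = spm.SentencePieceProcessor()
sp.Load(spe_model_path)
print(sp.EncodeAsPieces("eight two four"))
###Output
_____no_output_____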
###Markdown
Training an ASR Model with subword tokenization
Now that our tokenizer is built, let's begin constructing an ASR model that will use this tokenizer for its dataset pre-processing and post-processing steps.
We will use a Citrinet model to demonstrate the usage of subword tokenization models for training and inference. Citrinet is a [QuartzNet-like architecture](https://arxiv.org/abs/1910.10261), but it uses subword-tokenization along with 8x subsampling and [Squeeze-and-Excitation](https://arxiv.org/abs/1709.01507) to achieve strong accuracy in transcriptions while still using non-autoregressive decoding for efficient inference.
We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/v1.0.0/).
NeMo lets us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collection contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Training from scratch
To train from scratch, you need to prepare your training data in the right format and specify your model's architecture. Specifying Our Model with a YAML Config File
We'll build a *Citrinet* model with a *greedy CTC decoder* for this tutorial, using the configuration found in `./configs/citrinet_bpe.yaml`.
If we open up this config file, we find a model section which describes the architecture of our model. It contains an entry labeled `encoder`, with a field called `jasper` that contains a list with multiple entries. Each member of this list specifies one block in our model, and looks something like this:
```
- filters: 192
repeat: 5
kernel: [11]
stride: [1]
dilation: [1]
dropout: 0.0
residual: false
separable: true
se: true
se_context_size: -1
```
The first member of the list corresponds to the first block in the QuartzNet/Citrinet architecture diagram.
Some entries at the top of the file specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.
Using a YAML config such as this helps get a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.
###Code
from omegaconf import OmegaConf, open_dict
params = OmegaConf.load("./configs/citrinet_bpe.yaml")
###Output
_____no_output_____
###Markdown
Let us make the network smaller since `AN4` is a particularly small dataset and does not need the capacity of the general config.
###Code
print(OmegaConf.to_yaml(params))
###Output
_____no_output_____
###Markdown
Specifying the tokenizer to the model
Now that we have a model config, we are almost ready to train it! We just have to tell it where the tokenizer directory is, and it will do the rest for us!
We have to provide just two pieces of information via the config:
- `tokenizer.dir`: The directory where the tokenizer files are stored
- `tokenizer.type`: Can be `bpe` (for `sentencepiece`-based tokenizers) or `wpe` (for HuggingFace-based BERT WordPiece tokenizers). This tells the model what type of tokenizer is being supplied so that it can parse the tokenizer directory and construct the actual tokenizer.
**Note**: We only have to provide the **directory** where the tokenizer file exists along with its vocabulary and any other essential components. We pass the directory instead of an explicit vocabulary path, since not all libraries construct their tokenizer in the same manner, so the model will figure out how it should prepare the tokenizer.
###Code
params.model.tokenizer.dir = data_dir + "/tokenizers/an4/tokenizer_spe_unigram_v32/" # note this is a directory, not a path to a vocabulary file
params.model.tokenizer.type = "bpe"
###Output
_____no_output_____
###Markdown
Training with PyTorch Lightning
NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected.
However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule and we recommend you use PytorchLightning for training and fine-tuning as it makes using mixed precision and distributed training very easy. So to start, let's create a Trainer instance for training on a GPU for 50 epochs
###Code
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
###Output
_____no_output_____
###Markdown
Next, we instantiate an ASR model based on our ``citrinet_bpe.yaml`` file from the previous section.
Note that this is a stage during which we also tell the model where our training and validation manifests are.
###Code
# Update paths to dataset
params.model.train_ds.manifest_filepath = train_manifest
params.model.validation_ds.manifest_filepath = test_manifest
# remove spec augment for this dataset
params.model.spec_augment.rect_masks = 0
###Output
_____no_output_____
###Markdown
Note the subtle difference in the model that we instantiate - `EncDecCTCModelBPE` instead of `EncDecCTCModel`.
`EncDecCTCModelBPE` is nearly identical to `EncDecCTCModel` (it is in fact a subclass!) that simply adds support for subword tokenization.
###Code
first_asr_model = nemo_asr.models.EncDecCTCModelBPE(cfg=params.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Training: Monitoring Progress
We can now start TensorBoard to monitor how training is going. Recall that WER stands for Word Error Rate, so the lower it is, the better.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
With that, we can start training with just one line!
###Code
# Start training!!!
trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Save the model easily, along with the tokenizer, using `save_to`.
Later, we use `restore_from` to restore the model; this will also reinitialize the tokenizer!
###Code
first_asr_model.save_to("first_model.nemo")
!ls -l -- *.nemo
###Output
_____no_output_____
###Markdown
There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to()`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from()`. We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:
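A minimal sketch of that save/restore round trip (the checkpoint filename and the variable name for the rebuilt model are illustrative):
```
# Save the current weights together with the tokenizer (illustrative filename)
first_asr_model.save_to("first_model_checkpoint.nemo")

# Later: restore the config, weights and tokenizer from the checkpoint
first_asr_model_continued = nemo_asr.models.EncDecCTCModelBPE.restore_from("first_model_checkpoint.nemo")
```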
###Code
print(params.model.optim)
###Output
_____no_output_____
###Markdown
After training and hyper parameter tuning
Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `.setup_optimization()` with the new optimization parameters.
###Code
import copy
new_opt = copy.deepcopy(params.model.optim)
new_opt.lr = 0.1
first_asr_model.setup_optimization(optim_config=new_opt);
# And then you can invoke trainer.fit(first_asr_model)
###Output
_____no_output_____
###Markdown
Inference
Let's have a quick look at how one could run inference with NeMo's ASR model.
First, ``EncDecCTCModelBPE`` and its subclasses contain a handy ``transcribe`` method which can be used to simply obtain audio files' transcriptions. It also has a `batch_size` argument to improve performance.
###Code
print(first_asr_model.transcribe(paths2audio_files=[data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
data_dir + '/an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
batch_size=4))
###Output
_____no_output_____
###Markdown
Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute the Word Error Rate (WER) metric between predictions and references.
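As a reminder, WER is the ratio of word-level edit operations to the length of the reference: WER = (S + D + I) / N, where S, D and I are the numbers of substituted, deleted and inserted words and N is the number of words in the reference. The helper used in the loop below accumulates the numerator and denominator across batches before the final division.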
###Code
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16
# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()
first_asr_model.eval()
# We remove some preprocessing artifacts which benefit training
first_asr_model.preprocessor.featurizer.pad_to = 0
first_asr_model.preprocessor.featurizer.dither = 0.0
# We will be computing the Word Error Rate (WER) metric between our hypotheses (predictions) and the reference transcripts.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []
# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
test_batch = [x.cuda() for x in test_batch]
targets = test_batch[2]
targets_lengths = test_batch[3]
log_probs, encoded_len, greedy_predictions = first_asr_model(
input_signal=test_batch[0], input_signal_length=test_batch[1]
)
# Notice the model has a helper object to compute WER
first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
_, wer_num, wer_denom = first_asr_model._wer.compute()
wer_nums.append(wer_num.detach().cpu().numpy())
wer_denoms.append(wer_denom.detach().cpu().numpy())
# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
###Output
_____no_output_____
###Markdown
This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Utilizing the underlying tokenizer
Since the model has an underlying tokenizer, it would be nice to use it externally as well - say for getting the subwords of the transcript or to tokenize a dataset using the same tokenizer as the ASR model.
###Code
tokenizer = first_asr_model.tokenizer
tokenizer
###Output
_____no_output_____
###Markdown
You can get the tokenizer's vocabulary using the `tokenizer.tokenizer.get_vocab()` method.
ASR tokenizers will map the subword to an integer index in the vocabulary for convenience.
###Code
vocab = tokenizer.tokenizer.get_vocab()
vocab
###Output
_____no_output_____
###Markdown
You can also tokenize and detokenize some text using this tokenizer, with the same API across all of NeMo.
###Code
tokens = tokenizer.text_to_tokens("hello world")
tokens
token_ids = tokenizer.text_to_ids("hello world")
token_ids
subwords = tokenizer.ids_to_tokens(token_ids)
subwords
text = tokenizer.ids_to_text(token_ids)
text
###Output
_____no_output_____
###Markdown
Model Improvements
You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.
Data Augmentation
There exist several ASR data augmentation methods that can increase the size of our training set.
For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)
Our toy model disables spectrogram augmentation, because it is not significantly beneficial for the short demo.
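For reference, here is a hedged sketch of what such a `spec_augment` section can look like (the field names follow NeMo's `SpectrogramAugmentation` module; the values are purely illustrative, not this tutorial's defaults):
```
spec_augment:
  freq_masks: 2     # number of frequency masks (SpecAugment)
  time_masks: 2     # number of time masks (SpecAugment)
  freq_width: 27
  time_width: 100
  rect_masks: 5     # number of rectangular masks (Cutout); 0 disables them
  rect_freq: 50
  rect_time: 120
```
The section actually used by this toy model is printed by the next cell.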
###Code
print(OmegaConf.to_yaml(first_asr_model._cfg['spec_augment']))
###Output
_____no_output_____
###Markdown
If you want to enable SpecAugment in your model, make sure your .yaml config file contains a `model/spec_augment` section which looks like the one above. Transfer learning
Transfer learning is an important machine learning technique that uses a model’s knowledge of one task to perform better on another. Fine-tuning is one of the techniques to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.
Transfer learning with NeMo is simple. Let's demonstrate how we could fine-tune the model we trained earlier on AN4 data. (NOTE: this is a toy example). And, while we are at it, we will change the model's vocabulary to demonstrate how it's done. -----
First, let's create another tokenizer - perhaps using a larger vocabulary size than the small tokenizer we created earlier. We also swap out the `sentencepiece` tokenizer for a `BERT WordPiece` tokenizer.
###Code
!python ./scripts/process_asr_text_tokenizer.py \
--manifest="{data_dir}/an4/train_manifest.json" \
--data_root="{data_dir}/tokenizers/an4/" \
--vocab_size=64 \
--tokenizer="wpe" \
--no_lower_case \
--log
###Output
_____no_output_____
###Markdown
Now let's load the previously trained model so that we can fine-tune it:
###Code
restored_model = nemo_asr.models.EncDecCTCModelBPE.restore_from("./first_model.nemo")
###Output
_____no_output_____
###Markdown
Now let's update the vocabulary in this model
###Code
# Check what kind of vocabulary/alphabet the model has right now
print(restored_model.decoder.vocabulary)
# Let's change the tokenizer vocabulary by passing the path to the new directory,
# and also change the type
restored_model.change_vocabulary(
new_tokenizer_dir=data_dir + "/tokenizers/an4/tokenizer_wpe_v64/",
new_tokenizer_type="wpe"
)
###Output
_____no_output_____
###Markdown
After this, our decoder has completely changed, but our encoder (where most of the weights are) remains intact. Let's fine-tune this model for 20 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After training and hyper parameter tuning" section).
**Note**: For this demonstration, we will also freeze the encoder to speed up finetuning (since both tokenizers are built on the same train set), but in general it should not be done for proper training on a new language (or on a different corpus than the original train corpus).
###Code
# Use the smaller learning rate we set before
restored_model.setup_optimization(optim_config=new_opt)
# Point to the data we'll use for fine-tuning as the training set
restored_model.setup_training_data(train_data_config=params['model']['train_ds'])
# Point to the new validation data for fine-tuning
restored_model.setup_validation_data(val_data_config=params['model']['validation_ds'])
# Freeze the encoder layers (should not be done for finetuning, only done for demo)
restored_model.encoder.freeze()
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=20)
trainer.fit(restored_model)
###Output
_____no_output_____ |
nbs/3.2_mining.unsupervised.eda.traceability.d2v.ipynb | ###Markdown
Experimenting Neural Unsupervised Approaches for Software Information Retrieval [d2v]> This module is dedicated to evaluating doc2vec. Consider copying the entire notebook for a new and separate empirical evaluation. > Implementing mutual information analysis> Author: @danaderp April 2020> Author: @danielrc Nov 2020
###Code
#!pip install gensim
#!pip install seaborn
#!pip install sklearn
!pip install -e .
###Output
[31mERROR: File "setup.py" not found. Directory cannot be installed in editable mode: /tf/main/nbs[0m
[33mWARNING: You are using pip version 19.2.3, however version 20.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
###Markdown
This copy is for Cisco purposes. It was adapted to process private GitHub data from Cisco.
###Code
import numpy as np
import gensim
import pandas as pd
from itertools import product
from random import sample
import functools
import os
#export
from datetime import datetime
import seaborn as sns
#export
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
#export
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import plot_precision_recall_curve
from sklearn.metrics import auc
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
from pandas.plotting import lag_plot
import math as m
import random as r
import collections
from sklearn.metrics.pairwise import cosine_similarity
#export
from gensim.models import WordEmbeddingSimilarityIndex
from gensim.similarities import SparseTermSimilarityMatrix
from gensim import corpora
#https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cosine.html
#export
from scipy.spatial import distance
from scipy.stats import pearsonr
#export
from sklearn.metrics import average_precision_score
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
from pathlib import Path
import ds4se as ds
from ds4se.mgmnt.prep.conv import *
###Output
_____no_output_____
###Markdown
Experiments Set-up
###Code
path_data = '../dvc-ds4se/' #dataset path
path_to_trained_model = path_data+'models/wv/bpe128k/[word2vec-Java-Py-SK-500-20E-128k-1594873397.267055].model'
#CISCO GitHub Parameters
def sacp_params():
return {
"vectorizationType": VectorizationType.word2vec,
"linkType": LinkType.issue2src,
"system": 'sacp-python-common',
"path_to_trained_model": path_data + 'models/wv/conv/[word2vec-Py-Java-Wiki-SK-500-20E[0]-1592979270.711115].model',
"source_type": SoftwareArtifacts.PR,
"target_type": SoftwareArtifacts.PY,
"path_mappings": '/tf/data/cisco/sacp_data/sacp-pr-mappings.csv',
"system_path_config": {
"system_path": '/tf/data/cisco/sacp_data/[sacp-python-common-all-corpus-1596383717.992744].csv', #MUST have bpe8k <----
"sep": '~',
"names": ['ids','conv'],
"prep": Preprocessing.conv
},
"saving_path": path_data/'se-benchmarking/traceability/cisco/sacp',
"names": ['Source','Target','Linked?']
}
path_to_trained_model = path_data + 'models/wv/bpe8k/[word2vec-Java-Py-Wiki-SK-500-20E-8k[12]-1594546477.788739].model'
def sacp_params_bpe():
return {
"vectorizationType": VectorizationType.word2vec,
"linkType": LinkType.issue2src,
"system": 'sacp-python-common',
"path_to_trained_model": path_to_trained_model,
"source_type": SoftwareArtifacts.PR,
"target_type": SoftwareArtifacts.PY,
"path_mappings": '/tf/data/cisco/sacp_data/sacp-pr-mappings.csv',
"system_path_config": {
"system_path": '/tf/data/cisco/sacp_data/[sacp-python-common-all-corpus-1596383717.992744].csv',
"sep": '~',
"names": ['ids','bpe8k'],
"prep": Preprocessing.bpe
},
"saving_path": path_data + 'se-benchmarking/traceability/cisco/sacp',
"names": ['Source','Target','Linked?'],
"model_prefix":path_data + 'models/bpe/sentencepiece/wiki_py_java_bpe_8k' #For BPE Analysis
}
parameters = sacp_params_bpe()
#parameters = sacp_params()
parameters
###Output
_____no_output_____
###Markdown
Testing experiments set-up
###Code
#tst
parameters['system_path_config']['system_path']
#tst
parameters['system_path_config']['names'][1]
parameters['system_path_config']['sep'] #tst
#tst
df_all_system = pd.read_csv(
parameters['system_path_config']['system_path'],
#names = params['system_path_config']['names'], #include the names into the files!!!
header = 0,
index_col = 0,
sep = parameters['system_path_config']['sep']
)
df_all_system.head(1)
#tst
tag = parameters['system_path_config']['names'][1]
[doc.split() for doc in df_all_system[df_all_system[tag].notnull()][tag].values]
len(df_all_system[tag].values) #tst
#tst
len(df_all_system[df_all_system[tag].notnull()]) # some files are __init__ and therefore empty
#tst
df_all_system[df_all_system[tag].notnull()][tag].values
#tst
df_all_system.loc[df_all_system['type'] == parameters['source_type']][parameters['system_path_config']['names']]
df_all_system.loc[df_all_system['type'] == parameters['target_type']][parameters['system_path_config']['names']]
###Output
_____no_output_____
###Markdown
Defining BasicSequenceVectorization
###Code
#tst
print(list(VectorizationType), list(DistanceMetric), list(SimilarityMetric), list(LinkType))
#export
class BasicSequenceVectorization():
'''Implementation of the basic sequence (vanilla) vectorization; other classes can inherit from this one'''
def __init__(self, params):
self.params = params
self.df_nonground_link = None
self.df_ground_link = None
self.prep = ConventionalPreprocessing(params, bpe = True)
self.df_all_system = pd.read_csv(
params['system_path_config']['system_path'],
#names = params['system_path_config']['names'], #include the names into the files!!!
header = 0,
index_col = 0,
sep = params['system_path_config']['sep']
)
#self.df_source = pd.read_csv(params['source_path'], names=['ids', 'text'], header=None, sep=' ')
#self.df_target = pd.read_csv(params['target_path'], names=['ids', 'text'], header=None, sep=' ')
self.df_source = self.df_all_system.loc[self.df_all_system['type'] == params['source_type']][params['system_path_config']['names']]
self.df_target = self.df_all_system.loc[self.df_all_system['type'] == params['target_type']][params['system_path_config']['names']]
#NA verification
tag = params['system_path_config']['names'][1]
self.df_source[tag] = self.df_source[tag].fillna("")
self.df_target[tag] = self.df_target[tag].fillna("")
if params['system_path_config']['prep'] == Preprocessing.conv: #if conventional preprocessing
self.documents = [doc.split() for doc in self.df_all_system[self.df_all_system[tag].notnull()][tag].values] #Preparing Corpus
self.dictionary = corpora.Dictionary( self.documents ) #Preparing Dictionary
logging.info("conventional preprocessing documents and dictionary")
elif params['system_path_config']['prep'] == Preprocessing.bpe:
self.documents = [eval(doc) for doc in self.df_all_system[tag].values] #Preparing Corpus
self.dictionary = corpora.Dictionary( self.documents ) #Preparing Dictionary
logging.info("bpe preprocessing documents and dictionary")
####INFO science params
abstracted_vocab = [ set(doc) for doc in self.df_all_system[ 'bpe8k' ].values] #creation of sets
abstracted_vocab = functools.reduce( lambda a,b : a.union(b), abstracted_vocab ) #union of sets
self.vocab = {self.prep.sp_bpe.id_to_piece(id): 0 for id in range(self.prep.sp_bpe.get_piece_size())}
dict_abs_vocab = { elem : 0 for elem in abstracted_vocab - set(self.vocab.keys()) } #Ignored vocab by BPE
self.vocab.update(dict_abs_vocab) #Updating
#This can be extended for future metrics <---------------------
#TODO include mutual and join information
self.dict_labels = {
DistanceMetric.COS:[DistanceMetric.COS, SimilarityMetric.COS_sim],
SimilarityMetric.Pearson:[SimilarityMetric.Pearson],
DistanceMetric.EUC:[DistanceMetric.EUC, SimilarityMetric.EUC_sim],
DistanceMetric.WMD:[DistanceMetric.WMD, SimilarityMetric.WMD_sim],
DistanceMetric.SCM:[DistanceMetric.SCM, SimilarityMetric.SCM_sim],
DistanceMetric.MAN:[DistanceMetric.MAN, SimilarityMetric.MAN_sim],
EntropyMetric.MSI_I:[EntropyMetric.MSI_I, EntropyMetric.MSI_X],
EntropyMetric.MI:[EntropyMetric.JI, EntropyMetric.MI]
}
def ground_truth_processing(self, path_to_ground_truth = '', from_mappings = False):
'Optional method used when the corpus has ground truth. This function creates tuples of links'
if from_mappings:
df_mapping = pd.read_csv(self.params['path_mappings'], header = 0, sep = ',')
ground_links = list(zip(df_mapping['id_pr'].astype(str), df_mapping['doc_id']))
else:
ground_truth = open(path_to_ground_truth,'r')
#Organizing The Ground Truth under the given format
ground_links = [ [(line.strip().split()[0], elem) for elem in line.strip().split()[1:]] for line in ground_truth]
ground_links = functools.reduce(lambda a,b : a+b,ground_links) #reducing into one list
assert len(ground_links) == len(set(ground_links)) #To Verify Redundancies in the file
return ground_links
def samplingLinks(self, sampling = False, samples = 10, basename = False):
if basename:
source = [os.path.basename(elem) for elem in self.df_source['ids'].values ]
target = [os.path.basename(elem) for elem in self.df_target['ids'].values ]
else:
source = self.df_source['ids'].values
target = self.df_target['ids'].values
if sampling:
links = sample( list( product( source , target ) ), samples)
else:
links = list( product( source , target ))
return links
def cos_scipy(self, vector_v, vector_w):
cos = distance.cosine( vector_v, vector_w )
return [cos, 1.-cos]
def euclidean_scipy(self, vector_v, vector_w):
dst = distance.euclidean(vector_v,vector_w)
return [dst, 1./(1.+dst)] #Computing the inverse for similarity
def manhattan_scipy(self, vector_v, vector_w):
dst = distance.cityblock(vector_v,vector_w)
n = len(vector_v)
return [dst, 1./(1.+dst)] #Computing the inverse for similarity
def pearson_abs_scipy(self, vector_v, vector_w):
'''We are not sure that pearson correlation works well on doc2vec inference vectors'''
#vector_v = np.asarray(vector_v, dtype=np.float32)
#vector_w = np.asarray(vector_w, dtype=np.float32)
logging.info("pearson_abs_scipy" + str(vector_v) + "__" + str(vector_w))
corr, _ = pearsonr(vector_v, vector_w)
return [abs(corr)] #Absolute value of the correlation
def computeDistanceMetric(self, links, metric_list):
'''Metric List Iteration'''
metric_labels = [ self.dict_labels[metric] for metric in metric_list] #tracking of the labels
distSim = [[link[0], link[1], self.distance( metric_list, link )] for link in links] #Return the link with metrics
distSim = [[elem[0], elem[1]] + elem[2] for elem in distSim] #Return the link with metrics
return distSim, functools.reduce(lambda a,b : a+b, metric_labels)
def ComputeDistanceArtifacts(self, metric_list, sampling = False , samples = 10, basename = False):
'''Activates Distance and Similarity Computations
@metric_list if [] then Computes All metrics
@sampling is False by the default
@samples is the number of samples (or links) to be generated'''
links_ = self.samplingLinks( sampling, samples, basename )
docs, metric_labels = self.computeDistanceMetric( metric_list=metric_list, links=links_) #checkpoints
self.df_nonground_link = pd.DataFrame(docs, columns =[self.params['names'][0], self.params['names'][1]]+ metric_labels) #Transforming into a Pandas
logging.info("Non-groundtruth links computed")
pass
def SaveLinks(self, grtruth=False, sep=' ', mode='a'):
timestamp = datetime.timestamp(datetime.now())
path_to_link = self.params['saving_path'] + '['+ self.params['system'] + '-' + str(self.params['vectorizationType']) + '-' + str(self.params['linkType']) + '-' + str(grtruth) + '-{}].csv'.format(timestamp)
if grtruth:
self.df_ground_link.to_csv(path_to_link, header=True, index=True, sep=sep, mode=mode)
else:
self.df_nonground_link.to_csv(path_to_link, header=True, index=True, sep=sep, mode=mode)
logging.info('Saving in...' + path_to_link)
pass
def findDistInDF(self, g_tuple, from_mappings=False, semeru_format=False):
'''Return the index values of the matched mappings
.eq is used for Source since it must match the exact code to avoid number substrings
for the target, the substring match might work fine'''
if from_mappings:
dist = self.df_ground_link.loc[(self.df_ground_link["Source"].eq(g_tuple[0]) ) &
(self.df_ground_link["Target"].str.contains(g_tuple[1], regex=False))]
logging.info('findDistInDF: from_mappings')
elif semeru_format:
dist = self.df_ground_link.loc[(self.df_ground_link["Source"].str.contains(g_tuple[0], regex=False) ) &
(self.df_ground_link["Target"].str.contains(g_tuple[1], regex=False))]
logging.info('findDistInDF: semeru_format')
else:
dist = self.df_ground_link[self.df_ground_link[self.params['names'][0]].str.contains( g_tuple[0][:g_tuple[0].find('.')] + '-' )
& self.df_ground_link[self.params['names'][1]].str.contains(g_tuple[1][:g_tuple[1].find('.')]) ]
logging.info('findDistInDF: default')
return dist.index.values
def MatchWithGroundTruth(self, path_to_ground_truth='', from_mappings=False, semeru_format=False ):
self.df_ground_link = self.df_nonground_link.copy()
self.df_ground_link[self.params['names'][2]] = 0
matchGT = [ self.findDistInDF( g , from_mappings=from_mappings, semeru_format=semeru_format ) for g in self.ground_truth_processing(path_to_ground_truth,from_mappings)]
matchGT = functools.reduce(lambda a,b : np.concatenate([a,b]), matchGT) #Concatenate indexes
new_column = pd.Series(np.full([len(matchGT)], 1 ), name=self.params['names'][2], index = matchGT)
self.df_ground_link.update(new_column)
logging.info("Groundtruth links computed")
pass
###Output
_____no_output_____
###Markdown
Testing BasicSequenceVectorization
###Code
general2vec = BasicSequenceVectorization(params = parameters)
general2vec.vocab
general2vec.documents
general2vec.dictionary
general2vec.df_all_system.head(1)
general2vec.df_all_system.shape #data final tensor
#tst for libest
path_to_ground_truth = '/tf/main/benchmarking/traceability/testbeds/groundtruth/english/[libest-ground-req-to-tc].txt'
general2vec.ground_truth_processing(path_to_ground_truth)
#tst for sacp
general2vec.ground_truth_processing(parameters['path_mappings'], from_mappings = True)
import math
import dit
###Output
_____no_output_____
###Markdown
Artifacts Similarity with Doc2Vec Try to reproduce the same empirical evaluation as here: [link](https://arxiv.org/pdf/1507.07998.pdf). Pay attention to:- Accuracy vs. Dimensionality (we can replace accuracy with false positive rate or true positive rate)- Visualizing paragraph vectors using t-SNE- Computing Cosine Distance and Similarity. More about similarity: [link](https://www.kdnuggets.com/2017/08/comparing-distance-measurements-python-scipy.html)
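As a quick reference for the convention used below, here is a minimal sketch of how a cosine distance is turned into a similarity (the two vectors are random placeholders standing in for doc2vec inference vectors):
```
import numpy as np
from scipy.spatial import distance

v = np.random.rand(500)            # placeholder for an inferred source vector
w = np.random.rand(500)            # placeholder for an inferred target vector
cos_dist = distance.cosine(v, w)   # reported as DistanceMetric.COS
cos_sim = 1.0 - cos_dist           # reported as SimilarityMetric.COS_sim
```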
###Code
#path_to_trained_model": 'test_data/models/pv/conv/[doc2vec-Py-Java-PVDBOW-500-20E-1592609630.689167].model',
#"path_to_trained_model": 'test_data/models/pv/conv/[doc2vec-Py-Java-Wiki-PVDBOW-500-20E[15]-1592941134.367976].model',
path_to_trained_model = 'test_data/models/[doc2vec-Py-Java-PVDBOW-500-20E-8k-1594572857.17191].model'
def doc2vec_params():
return {
"vectorizationType": VectorizationType.doc2vec,
"linkType": LinkType.req2tc,
"system": 'libest',
"path_to_trained_model": path_to_trained_model,
"source_path": '/tf/main/benchmarking/traceability/testbeds/nltk/[libest-pre-req].csv',
"target_path": '/tf/main/benchmarking/traceability/testbeds/nltk/[libest-pre-tc].csv',
"system_path": '/tf/main/benchmarking/traceability/testbeds/nltk/[libest-pre-all].csv',
"saving_path": 'test_data/',
"names": ['Source','Target','Linked?']
}
doc2vec_params = doc2vec_params()
doc2vec_params
#Export
class Doc2VecSeqVect(BasicSequenceVectorization):
def __init__(self, params):
super().__init__(params)
self.new_model = gensim.models.Doc2Vec.load( params['path_to_trained_model'] )
self.new_model.init_sims(replace=True) # Normalizes the vectors in the word2vec class.
self.df_inferred_src = None
self.df_inferred_trg = None
self.dict_distance_dispatcher = {
DistanceMetric.COS: self.cos_scipy,
SimilarityMetric.Pearson: self.pearson_abs_scipy,
DistanceMetric.EUC: self.euclidean_scipy,
DistanceMetric.MAN: self.manhattan_scipy
}
def distance(self, metric_list, link):
'''Iterate on the metrics'''
ν_inferredSource = list(self.df_inferred_src[self.df_inferred_src['ids'].str.contains(link[0])]['inf-doc2vec'])
w_inferredTarget = list(self.df_inferred_trg[self.df_inferred_trg['ids'].str.contains(link[1])]['inf-doc2vec'])
dist = [ self.dict_distance_dispatcher[metric](ν_inferredSource,w_inferredTarget) for metric in metric_list]
logging.info("Computed distances or similarities "+ str(link) + str(dist))
return functools.reduce(lambda a,b : a+b, dist) #Always return a list
def computeDistanceMetric(self, links, metric_list):
'''Computes the requested distance/similarity metrics'''
metric_labels = [ self.dict_labels[metric] for metric in metric_list] #tracking of the labels
distSim = [[link[0], link[1], self.distance( metric_list, link )] for link in links] #Return the link with metrics
distSim = [[elem[0], elem[1]] + elem[2] for elem in distSim] #Return the link with metrics
return distSim, functools.reduce(lambda a,b : a+b, metric_labels)
def InferDoc2Vec(self, steps=200):
'''Activate Inference on Target and Source Corpus'''
self.df_inferred_src = self.df_source.copy()
self.df_inferred_trg = self.df_target.copy()
self.df_inferred_src['inf-doc2vec'] = [self.new_model.infer_vector(artifact.split(),steps=steps) for artifact in self.df_inferred_src['text'].values]
self.df_inferred_trg['inf-doc2vec'] = [self.new_model.infer_vector(artifact.split(),steps=steps) for artifact in self.df_inferred_trg['text'].values]
logging.info("Infer Doc2Vec on Source and Target Complete")
###Output
_____no_output_____
###Markdown
Testing Doc2Vec SequenceVectorization
###Code
doc2vec = Doc2VecSeqVect(params = doc2vec_params)
#[step1]Apply Doc2Vec Inference
doc2vec.InferDoc2Vec(steps=200)
doc2vec.df_inferred_src.head(2)
#test_inferDoc2Vec_trg = inferDoc2Vec(df_target)
#test_inferDoc2Vec_trg.head()
doc2vec.df_inferred_trg.head(2)
pearsonr(doc2vec.df_inferred_trg['inf-doc2vec'][0], doc2vec.df_inferred_trg['inf-doc2vec'][0])
#[step 2]NonGroundTruth Computation
metric_l = [DistanceMetric.EUC,DistanceMetric.COS,DistanceMetric.MAN]# , SimilarityMetric.Pearson]
doc2vec.ComputeDistanceArtifacts( sampling=False, samples = 50, metric_list = metric_l )
doc2vec.df_nonground_link.head()
#[step 3]Saving Non-GroundTruth Links
doc2vec.SaveLinks()
#Loading Non-GroundTruth Links (change the timestamp with the assigned in the previous step)
df_nonglinks_doc2vec = LoadLinks(timestamp=1594653325.258415, params=doc2vec_params)
df_nonglinks_doc2vec.head()
#[step 4]GroundTruthMatching Testing
path_to_ground_truth = '/tf/main/benchmarking/traceability/testbeds/groundtruth/english/[libest-ground-req-to-tc].txt'
doc2vec.MatchWithGroundTruth(path_to_ground_truth)
doc2vec.df_ground_link
#[step 5]Saving GroundTruth Links
doc2vec.SaveLinks(grtruth = True)
#Loading Non-GroundTruth Links (change the timestamp with the assigned in the previous step)
df_glinks_doc2vec = LoadLinks(timestamp=1594653350.19946, params=doc2vec_params, grtruth = True)
df_glinks_doc2vec.head()
###Output
_____no_output_____
###Markdown
Approach Evaluation and Interpretation (doc2vec)
###Code
#supervisedEvalDoc2vec = SupervisedVectorEvaluation(doc2vec, similarity=SimilarityMetric.EUC_sim)
#supervisedEvalDoc2vec = SupervisedVectorEvaluation(doc2vec, similarity=SimilarityMetric.COS_sim)
supervisedEvalDoc2vec = SupervisedVectorEvaluation(doc2vec, similarity=SimilarityMetric.MAN_sim)
supervisedEvalDoc2vec.y_test
supervisedEvalDoc2vec.y_score
supervisedEvalDoc2vec.Compute_precision_recall_gain()
supervisedEvalDoc2vec.Compute_avg_precision()
supervisedEvalDoc2vec.Compute_roc_curve()
###Output
_____no_output_____
###Markdown
Compute distribution of similarities doc2vec
###Code
#Basic Statistics
filter_doc2vec = doc2vec.df_ground_link
filter_doc2vec.describe()
lag_plot(filter_doc2vec[[SimilarityMetric.EUC_sim]])
lag_plot(filter_doc2vec[DistanceMetric.EUC])
filter_doc2vec.hist(column=[SimilarityMetric.EUC_sim,DistanceMetric.EUC],color='k',bins=50,figsize=[10,5],alpha=0.5)
#Separate distance from similarity analysis here
errors = filter_doc2vec[[SimilarityMetric.EUC_sim,DistanceMetric.EUC]].std()
print(errors)
filter_doc2vec[[SimilarityMetric.EUC_sim,DistanceMetric.EUC]].plot.kde()
filter_doc2vec.hist(by='Linked?',column=SimilarityMetric.EUC_sim,figsize=[10, 5],bins=80)
filter_doc2vec.hist(by='Linked?',column=DistanceMetric.EUC,figsize=[10, 5],bins=80)
#separate the distance from the similarity plot
boxplot = filter_doc2vec.boxplot(by='Linked?',column=[SimilarityMetric.EUC_sim,DistanceMetric.EUC],figsize=[10, 5])
boxplot = filter_doc2vec.boxplot(by='Linked?',column=[SimilarityMetric.EUC_sim],figsize=[10, 5])
###Output
_____no_output_____
###Markdown
Combining Doc2vec and Word2vec Please check this post for further details: [link](https://stats.stackexchange.com/questions/217614/intepreting-doc2vec-cosine-similarity-between-doc-vectors-and-word-vectors)
###Code
! nbdev_build_docs #<-------- [Activate when stable]
! nbdev_build_lib
from nbdev.export import notebook2script
notebook2script()
#! pip install -e .
###Output
_____no_output_____ |
dog-breed-identification/1. Preprocess-GroupImages.ipynb | ###Markdown
1. Preprocess-GroupImages Import pkgs
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
%matplotlib inline
from IPython.display import display
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import os
import zipfile
import pickle
from PIL import Image
from shutil import copy2
###Output
_____no_output_____
###Markdown
Unzip files
###Code
# def Unzip(data_path, zip_name):
# extract_name = zip_name[0:-4]
# extract_path = os.path.join(data_path, extract_name)
# zip_path = os.path.join(data_path, zip_name)
# if not (os.path.isdir(extract_path) or os.path.isfile(extract_path)):
# with zipfile.ZipFile(zip_path) as file:
# for name in file.namelist():
# file.extract(name, data_path)
cwd = os.getcwd()
data_path = os.path.join(cwd, 'input')
# Unzip(data_path, os.path.join(data_path, 'labels.csv.zip'))
# Unzip(data_path, os.path.join(data_path, 'sample_submission.csv.zip'))
# Unzip(data_path, os.path.join(data_path, 'test.zip'))
# Unzip(data_path, os.path.join(data_path, 'train.zip'))
###Output
_____no_output_____
###Markdown
Group train data by class **Note: We create the folder structure required by train_datagen.flow_from_directory(...).**
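For context, here is a hedged sketch of how the grouped folders are typically consumed later (the generator parameters are illustrative, not the values used in the training notebook):
```
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'input/data_train',        # one subfolder per breed, created below
    target_size=(224, 224),    # illustrative input size
    batch_size=32,
    class_mode='categorical')  # one class per breed
```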
###Code
labels_path = os.path.join(data_path, 'labels.csv')
labels = pd.read_csv(labels_path)
print('labels.shape is {0}.'.format(labels.shape))
display(labels.head(2))
label_classes = labels.iloc[:,1].unique()
label_classes = sorted(label_classes)
display('The number of dog breeds is {0}'.format(len(label_classes)))
display(label_classes) ## You can display all to confirm this breeds are correct.
## Create data_train folder
data_train_path = os.path.join(data_path, 'data_train')
if os.path.isdir(data_train_path):
print('{0} already exists!'.format(data_train_path))
else:
os.mkdir(data_train_path)
print('{0} created!'.format(data_train_path))
## Create subfolders of data_train folder
for c in label_classes:
class_dir = os.path.join(data_train_path, c)
if not os.path.isdir(class_dir):
os.mkdir(class_dir)
print(os.listdir(data_train_path))
## Create data_val folder
data_val_path = os.path.join(data_path, 'data_val')
if os.path.isdir(data_val_path):
print('{0} already exists!'.format(data_val_path))
else:
os.mkdir(data_val_path)
print('{0} created!'.format(data_val_path))
## Create subfolder of data_val folder
for c in label_classes:
class_dir = os.path.join(data_val_path, c)
if not os.path.isdir(class_dir):
os.mkdir(class_dir)
print(os.listdir(data_val_path))
## Create folder for data_test folder
data_test_path = os.path.join(data_path, 'data_test')
if os.path.isdir(data_test_path):
print('{0} already exists!'.format(data_test_path))
else:
os.mkdir(data_test_path)
print('{0} created!'.format(data_test_path))
## Create subfolder for data_test folder
data_test_sub_path = os.path.join(data_test_path, 'test')
if not os.path.isdir(data_test_sub_path):
os.mkdir(data_test_sub_path)
print('{0} created!'.format(data_test_sub_path))
else:
print('{0} already exists!'.format(data_test_sub_path))
# Split data into train and validation
rate = 0.9
total_count = len(labels)
train_count = int(rate*total_count)
labels_train = labels[0:train_count]
labels_val = labels[train_count:]
print('total_count = {0}, train_count = {1}, val_count = {2}'.format(total_count, len(labels_train), len(labels_val)))
labels[:3]
# If images have already been moved to target_dir, do not move them again. We only check whether the first subfolder is empty
target_dir = os.path.join(data_path, 'data_train', 'affenpinscher')
if os.listdir(target_dir):
print(target_dir + ' is not empty, no need to move images again.')
else:
print('start to move images into data_train.')
# Move images of train data into its correct subfolder
for i, row in labels_train.iterrows():
image_path = os.path.join(data_path, 'train', '{0}.jpg'.format(row[0]))
target_dir = os.path.join(data_path, 'data_train', row[1])
# In order to confirm we get the correct file path
# print(row[0])
# print(row[1])
# print(image_path)
# print(target_dir)
copy2(image_path, target_dir)
print('finish')
# If images have already been moved to target_dir, do not move them again. We only check whether the first subfolder is empty
target_dir = os.path.join(data_path, 'data_val', 'affenpinscher')
if os.listdir(target_dir):
print(target_dir + ' is not empty, no need to move images again.')
else:
print('start to move images into data_val.')
# Move images of val data into its correct subfolder
for i, row in labels_val.iterrows():
image_path = os.path.join(data_path, 'train', '{0}.jpg'.format(row[0]))
target_dir = os.path.join(data_path, 'data_val', row[1])
# In order to confirm we get the correct file path
# print(row[0])
# print(row[1])
# print(image_path)
# print(target_dir)
copy2(image_path, target_dir)
print('finish')
# If images have already been moved to target_dir, do not move them again. We only check whether the first subfolder is empty
target_dir = os.path.join(data_path, 'data_test', 'test')
if os.listdir(target_dir):
print(target_dir + ' is not empty, no need to move images again.')
else:
print('start to move images into data_test.')
# Move images of test data into the test subfolder
test_image_paths = os.listdir(os.path.join(data_path, 'test'))
# print(test_image_paths)
for path in test_image_paths:
image_path = os.path.join(data_path, 'test', path)
copy2(image_path, data_test_sub_path)
print('finish')
print('Done!')
###Output
Done!
|
5_roi/6_plot_model_comparison.ipynb | ###Markdown
Plot model comparison results Natalia Vélez, May 2022
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import colors
import seaborn as sns
sns.set_context('talk')
sns.set_style('white')
###Output
_____no_output_____
###Markdown
Load model comparison results:
###Code
all_models = (
pd.read_csv('outputs/model_comparison_all.csv')
.rename(columns={'Row': 'roi'})
.melt(id_vars=['roi'], var_name='model', value_name='pxp')
)
all_models['model'] = all_models.model.astype('category').cat.rename_categories({'nonparametric': 'Non-parametric', 'parametricKL': 'KL', 'parametricpTrue': 'pTrue', 'parametric': 'Full model'})
all_models.head()
param_models = (
pd.read_csv('outputs/model_comparison_parametric.csv')
.rename(columns={'Row': 'roi'})
.melt(id_vars=['roi'], var_name='model', value_name='pxp')
)
param_models['model'] = param_models.model.astype('category').cat.rename_categories({'nonparametric': 'Non-parametric', 'parametricKL': 'KL', 'parametricpTrue': 'pTrue', 'parametric': 'Full model'})
param_models.head()
all_models_expr = (
pd.read_csv('outputs/model_comparison_all_expr.csv')
.rename(columns={'Row': 'roi'})
.melt(id_vars=['roi'], var_name='model', value_name='exp_r')
)
all_models_expr['model'] = all_models_expr.model.astype('category').cat.rename_categories({'nonparametric': 'Non-parametric', 'parametricKL': 'KL', 'parametricpTrue': 'pTrue', 'parametric': 'Full model'})
all_models_expr.head()
param_models_expr = (
pd.read_csv('outputs/model_comparison_parametric_expr.csv')
.rename(columns={'Row': 'roi'})
.melt(id_vars=['roi'], var_name='model', value_name='exp_r')
)
param_models_expr['model'] = param_models_expr.model.astype('category').cat.rename_categories({'nonparametric': 'Non-parametric', 'parametricKL': 'KL', 'parametricpTrue': 'pTrue', 'parametric': 'Full model'})
param_models_expr.head()
###Output
_____no_output_____
###Markdown
Plot:
###Code
pal = ['#ccc', '#1a936f', '#FFD20A', '#6F8EC3']
fig,ax = plt.subplots(figsize=(12,4))
sns.barplot(data=all_models, x='roi', y='pxp', hue='model', ax=ax, palette=pal)
ax.legend(title='Model', bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set(xlabel='', ylabel='PXP')
pal = ['#ccc', '#1a936f', '#FFD20A', '#6F8EC3']
fig,ax = plt.subplots(figsize=(12,4))
sns.barplot(data=all_models_expr, x='roi', y='exp_r', hue='model', ax=ax, palette=pal)
ax.legend(title='Model', bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set(xlabel='', ylabel='Expected posterior')
fig,ax = plt.subplots(figsize=(12,4))
sns.barplot(data=param_models, x='roi', y='pxp', hue='model', ax=ax, palette = pal[1:])
ax.legend(title='Model', bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set(xlabel='', ylabel='PXP')
fig,ax = plt.subplots(figsize=(12,4))
sns.barplot(data=param_models_expr, x='roi', y='exp_r', hue='model', ax=ax, palette=pal[1:])
ax.legend(title='Model', bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set(xlabel='', ylabel='Expected posterior')
###Output
_____no_output_____ |
5.3.1.2卷积神经网络-猫狗分类-使用数据增强的快速特征提取.ipynb | ###Markdown
Feature extraction with data augmentation: extend the conv_base model and then run the model end to end on the input data. Note: only attempt this if you have access to a GPU; it is prohibitively slow on a CPU.
###Code
from keras.applications import VGG16
conv_base = VGG16(weights='imagenet',
include_top=False,
input_shape=(150, 150, 3))
# Add a densely connected classifier on top of the convolutional base
from keras import models
from keras import layers
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
# Freeze the network (the convolutional base)
conv_base.trainable = False
# Train the model end to end with the frozen convolutional base
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
import os
base_dir = './data/cats_and_dogs_small'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
# Plot the loss and accuracy curves from training
import matplotlib.pyplot as plt
%matplotlib inline
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____ |
samples/keras/keras example networks - performance evaluation.ipynb | ###Markdown
TH Nürnberg - Neural Network Translator - Christoph Brandl, Philipp Grandeit Evaluate the Performance of the sample Keras Neural Networks for the Neural-Network-Translator This Jupyter notebook is an addition to the performance analysis in the project report. It provides all information about the models as well as the input values used and the function calls. Therefore, this notebook is suitable for repeating the performance tests. For a detailed description of the neural networks, please see the notebook "keras example networks".
###Code
from tensorflow import keras
import tensorflow as tf
import numpy as np
from sklearn.preprocessing import scale
import time
import os
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Create an out directory to save the models:
###Code
if not os.path.exists('out'):
os.makedirs('out')
###Output
_____no_output_____
###Markdown
Average Pooling 1D In this section, we will build a neural network consisting of only a single average pooling 1D layer. The created model can then be flashed onto the Arduino microcontroller to evaluate its performance.
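For reference, with `pool_size=2` and `strides=2` the layer simply averages non-overlapping pairs of the six input values; for the first sample input prepared below this gives:
```
[9, 119, 80, 35, 0, 29]  ->  [(9+119)/2, (80+35)/2, (0+29)/2]  =  [64.0, 57.5, 14.5]
```
This is the output we expect both from `model.predict` and from the translated model on the microcontroller.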
###Code
data_x = np.array([[1,2,3,4,5,6]])
data_y = np.array([[1,2,3]])
data_x = scale(data_x,axis=0)
data_x = tf.reshape(data_x,[-1,6,1])
data_y = tf.reshape(data_y,[-1,3,1])
model = keras.Sequential()
model.add(keras.layers.AveragePooling1D(pool_size=2, strides=2 ,padding='valid', input_shape = (6,1)))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(data_x, data_y, epochs=1, batch_size=10)
model.save('out/avg_pool_1d.h5')
###Output
_____no_output_____
###Markdown
In the next step, we will prepare ten sample inputs to verify the results as well as the prediction performance.
###Code
input_array = np.array([[9,119,80,35,0,29],
[94,33,146,36.6,0.254,51],
[10,125,70,26,115,31.1],
[76,0,0,39.4,0.257,43],
[1,97,66,15,140,23.2],
[82,19,110,22.2,0.245,57],
[5,117,92,0,0,34.1],
[75,26,0,36,0.546,60],
[3,158,76,36,245,31.6],
[58,11,54,24.8,0.267,22]
])
###Output
_____no_output_____
###Markdown
To make it more convenient, we will format the input data so that we can simply copy and paste the values into the serial dialog of the Arduino IDE. We can now simply copy the values between the square brackets [ ].
###Code
for i in range(0,10):
print(input_array[i].flatten().tolist())
###Output
_____no_output_____
###Markdown
In the next step, we will perform the predictions with the trained network.
###Code
total_duration = 0
for i in range(0,10):
input = tf.reshape(input_array[i],[1,6,1])
time_before = int(round(time.time_ns() / 1000))
predictions = model.predict(input)
time_after = int(round(time.time_ns() / 1000))
print(predictions)
total_duration += time_after - time_before
print("process time in microseconds: " + str(time_after - time_before))
average_duration = float(total_duration)/10
print("total_duration: " + str(total_duration))
print("average_duration: " + str(average_duration))
###Output
_____no_output_____
###Markdown
Now we can compare the output and the duration of the prediction of our trained model with the output of the neural network translator. Average Pooling 2D In this section, we repeat the building process but instead of a 1D average pooling layer, we will build a neural network only consisting of one single average pooling 2D layer. The created model can then be again used to validate the performance of the average pooling layer.
###Code
data_x = np.array([[1,1,1], [1,1,1], [1,1,1], [1,1,1], [1,1,1]])
data_y = np.array([[[1],[1]],[[1],[1]],[[1],[1]],[[1],[1]]])
data_x = scale(data_x,axis=0)
data_x = tf.reshape(data_x,[-1,5,3,1])
data_y = tf.reshape(data_y,[-1,4,2,1])
model = keras.Sequential()
model.add(keras.layers.AveragePooling2D(pool_size=(2,2), strides=(1,1),padding='valid', input_shape = (5,3,1)))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(data_x, data_y, epochs=1, batch_size=10)
model.save('out/avg_pool_2d.h5')
###Output
_____no_output_____
###Markdown
In the next step, we will again prepare ten input samples to check whether the result of our built model is equal to the result of our translated model and how it performs.
###Code
input_array = np.array([[[26,1,37], [115,189,31.3], [31.3,29.6,103], [0.205,0,83], [41,36,2.2]],
[[9,119,80], [35,0,29], [94,33,146], [36.6,0.254,51], [10,125,70]],
[[26,115,31.1], [76,0,0], [39.4,0.257,43], [1,97,66], [15,140,23.2]],
[[82,19,110], [22.2,0.245,57], [5,117,92], [0,0,34.1], [75,26,0]],
[[36,0.546,60], [3,158,76], [36,245,31.6], [58,11,54], [24.8,0.267,22]],
[[1,79,60], [42,48,43.5], [0.678,23,2], [75,64,24], [55,29.7,0.37]],
[[33,8,179], [72,42,130], [32.7,0.719,36], [6,85,78], [0,0,31.2]],
[[0.382,42,0], [129,110,46], [130,67.1,0.319], [26,5,143], [78,0,0]],
[[45,0.19,47], [5,130,82], [0,0,39.1], [0.956,37,6], [87,80,0]],
[[0,23.2,0.084], [0,119,64], [18,92,34.9], [0.725,23,1], [0,74,20]]
])
###Output
_____no_output_____
###Markdown
To make it more convenient, we will format the input data so that we can simply copy and paste the values into the serial dialog of the Arduino IDE. We can now simply copy the values between the square brackets [ ].
###Code
for i in range(0,10):
print(input_array[i].flatten().tolist())
###Output
_____no_output_____
###Markdown
In the next step, we will perform the predictions with the trained network.
###Code
total_duration = 0
for i in range(0,10):
input = tf.reshape(input_array[i],[1,5,3,1])
time_before = int(round(time.time_ns() / 1000))
predictions = model.predict(input)
time_after = int(round(time.time_ns() / 1000))
print(predictions)
total_duration += time_after - time_before
print("process time in microseconds: " + str(time_after - time_before))
average_duration = float(total_duration)/10
print("total_duration: " + str(total_duration))
print("average_duration: " + str(average_duration))
###Output
_____no_output_____
###Markdown
We can now compare the output and the runtime of our trained model with the output and the runtime of the neural network translator. Max Pooling 1D In this section, we will build a neural network only consisting of one single max pooling 1D layer. The created model can then be used to validate the performance and the runtime of the max pooling layer.
###Code
data_x = np.array([[1,2,3,4,5,6]])
data_y = np.array([[1,2,3]])
data_x = scale(data_x,axis=0)
data_x = tf.reshape(data_x,[-1,6,1])
data_y = tf.reshape(data_y,[-1,3,1])
model = keras.Sequential()
model.add(keras.layers.MaxPooling1D(pool_size=2, strides=2 ,padding='valid', input_shape = (6,1)))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(data_x, data_y, epochs=1, batch_size=10)
model.save('out/max_pool_1d.h5')
###Output
_____no_output_____
###Markdown
In the next step, we will prepare ten sample inputs to check whether the result of our built model is equal to the result of our translated model and how it performs.
###Code
input_array = np.array([[9,119,80,35,0,29],
[94,33,146,36.6,0.254,51],
[10,125,70,26,115,31.1],
[76,0,0,39.4,0.257,43],
[1,97,66,15,140,23.2],
[82,19,110,22.2,0.245,57],
[5,117,92,0,0,34.1],
[75,26,0,36,0.546,60],
[3,158,76,36,245,31.6],
[58,11,54,24.8,0.267,22]
])
###Output
_____no_output_____
###Markdown
To make it more convenient, we will format the input data so that we can simply copy and paste the values into the serial dialog of the Arduino IDE. We can now simply copy the values between the square brackets [ ].
###Code
for i in range(0,10):
print(input_array[i].flatten().tolist())
###Output
_____no_output_____
###Markdown
In the next step, we will perform the predictions with the trained network.
###Code
total_duration = 0
for i in range(0,10):
input = tf.reshape(input_array[i],[1,6,1])
time_before = int(round(time.time_ns() / 1000))
predictions = model.predict(input)
time_after = int(round(time.time_ns() / 1000))
print(predictions)
total_duration += time_after - time_before
print("process time in microseconds: " + str(time_after - time_before))
average_duration = float(total_duration)/10
print("total_duration: " + str(total_duration))
print("average_duration: " + str(average_duration))
###Output
_____no_output_____
###Markdown
We can now compare the output and the runtime of our trained model with the output and the runtime of the neural network translator. Max Pooling 2D In this section, we repeat the building process but instead of a 1D max pooling layer we will build a neural network only consisting of one single max pooling 2D layer. The created model can then be again used to validate the implementation and the performance of the max pooling layer.
###Code
data_x = np.array([[1,1,1], [1,1,1], [1,1,1], [1,1,1], [1,1,1]])
data_y = np.array([[[1],[1]],[[1],[1]],[[1],[1]],[[1],[1]]])
data_x = scale(data_x,axis=0)
data_x = tf.reshape(data_x,[-1,5,3,1])
data_y = tf.reshape(data_y,[-1,4,2,1])
model = keras.Sequential()
model.add(keras.layers.MaxPooling2D(pool_size=(2,2), strides=(1,1),padding='valid', input_shape = (5,3,1)))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(data_x, data_y, epochs=1, batch_size=10)
model.save('out/max_pool_2d.h5')
###Output
_____no_output_____
###Markdown
In the next step, we will prepare a sample of ten inputs to check whether the result of our built model is equal to the result of our translated model and how it performs.
###Code
input_array = np.array([[[26,1,37], [115,189,31.3], [31.3,29.6,103], [0.205,0,83], [41,36,2.2]],
[[9,119,80], [35,0,29], [94,33,146], [36.6,0.254,51], [10,125,70]],
[[26,115,31.1], [76,0,0], [39.4,0.257,43], [1,97,66], [15,140,23.2]],
[[82,19,110], [22.2,0.245,57], [5,117,92], [0,0,34.1], [75,26,0]],
[[36,0.546,60], [3,158,76], [36,245,31.6], [58,11,54], [24.8,0.267,22]],
[[1,79,60], [42,48,43.5], [0.678,23,2], [75,64,24], [55,29.7,0.37]],
[[33,8,179], [72,42,130], [32.7,0.719,36], [6,85,78], [0,0,31.2]],
[[0.382,42,0], [129,110,46], [130,67.1,0.319], [26,5,143], [78,0,0]],
[[45,0.19,47], [5,130,82], [0,0,39.1], [0.956,37,6], [87,80,0]],
[[0,23.2,0.084], [0,119,64], [18,92,34.9], [0.725,23,1], [0,74,20]]
])
###Output
_____no_output_____
###Markdown
To make it more convenient, we will prepare the input data so that we can simply copy and paste the values into the serial dialog of our Arduino IDE. We can now simply copy and paste the values between the square brackets [ ].
###Code
for i in range(0,10):
print(input_array[i].flatten().tolist())
###Output
_____no_output_____
###Markdown
In the next step, we will perform the predictions with the trained network.
###Code
total_duration = 0
for i in range(0,10):
input = tf.reshape(input_array[i],[1,5,3,1])
time_before = int(round(time.time_ns() / 1000))
predictions = model.predict(input)
time_after = int(round(time.time_ns() / 1000))
print(predictions)
total_duration += time_after - time_before
print("process time in microseconds: " + str(time_after - time_before))
average_duration = float(total_duration)/10
print("total_duration: " + str(total_duration))
print("average_duration: " + str(average_duration))
###Output
_____no_output_____
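###Markdown
As in the 1D case, the 2D max pooling output for a single sample can be reproduced directly in NumPy (the layer above uses pool_size=(2,2), strides=(1,1) and 'valid' padding), which gives an independent reference before comparing against the translated model.
###Code
# Sanity check (sketch): 2x2 max pooling with stride 1 and 'valid' padding, done by hand.
a = input_array[0]                                                               # shape (5, 3)
expected = np.maximum.reduce([a[:-1, :-1], a[:-1, 1:], a[1:, :-1], a[1:, 1:]])   # shape (4, 2)
predicted = model.predict(tf.reshape(a, [1, 5, 3, 1])).reshape(4, 2)
print("expected:\n", expected)
print("predicted:\n", predicted)
###Output
_____no_output_____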
###Markdown
We can now compare the output and the performance of our trained model with the output and the performance of the neural network translator. 2 Layer Diabetes In the following, we will create a sample neural network which consists of two dense layers. For this, the provided diabetes.csv file is used as training and test data.
###Code
dataset = np.loadtxt("diabetes.csv", delimiter=",", skiprows=1 )
diabetes_X = dataset[:,0:8]
diabetes_Y = dataset[:,8]
diabetes_X = scale(diabetes_X,axis=0)
model = keras.Sequential()
model.add(keras.layers.Dense(8, input_dim=8, activation='sigmoid'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(diabetes_X, diabetes_Y, epochs=300, batch_size=10)
model.save('out/2_layer_diabetes.h5')
###Output
_____no_output_____
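###Markdown
Since diabetes.csv serves as both training and test data here, a held-out evaluation can be added as a quick accuracy check. The following is only a sketch: it uses scikit-learn's train_test_split with an arbitrary 80/20 split and random_state, which are assumptions and not part of the original notebook.
###Code
# Optional sketch: evaluate a copy of the architecture on a held-out split.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(diabetes_X, diabetes_Y, test_size=0.2, random_state=0)
eval_model = keras.Sequential()
eval_model.add(keras.layers.Dense(8, input_dim=8, activation='sigmoid'))
eval_model.add(keras.layers.Dense(1, activation='sigmoid'))
eval_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
eval_model.fit(X_train, y_train, epochs=300, batch_size=10, verbose=0)
loss, acc = eval_model.evaluate(X_test, y_test, verbose=0)
print("held-out accuracy:", acc)
###Output
_____no_output_____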
###Markdown
In the following, the predictions are performed and the processing time is measured, to compare the framework with the translated model created by the neural network translator for the Arduino.
###Code
total_duration = 0
for i in range(1,11):
input_array = dataset[i][0:8]
input = tf.reshape(input_array,[1,8])
print("Input: " + str(np.array(input[0]).flatten().tolist()))
time_before = int(round(time.time_ns() / 1000))
predictions = model.predict(input)
time_after = int(round(time.time_ns() / 1000))
print("Preditction: " + str(predictions))
total_duration += time_after - time_before
print("Processing time in microseconds: " + str(time_after - time_before))
average_duration = float(total_duration)/10
print("total_duration: " + str(total_duration))
print("average_duration: " + str(average_duration))
###Output
_____no_output_____
###Markdown
3 Layer Diabetes In addition to the 2-layer-diabetes neural network, a 3-layer-diabetes neural network can be created to test the neural network translator. Not only does this neural network have an additional dense layer, but it also uses different activation functions. Build and save the neural network model:
###Code
dataset = np.loadtxt("diabetes.csv", delimiter=",", skiprows=1 )
dataset[1:11]
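# note: diabetes_X and diabetes_Y (the scaled features and labels) are reused from the 2-layer example above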
model = keras.Sequential()
model.add(keras.layers.Dense(16, input_dim=8, activation='relu'))
model.add(keras.layers.Dense(8, activation="relu"))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(diabetes_X, diabetes_Y, epochs=300, batch_size=10)
model.save('out/3_layer_diabetes.h5')
###Output
_____no_output_____
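###Markdown
Before translating the model, it can help to confirm the layer shapes and activation functions that the neural network translator has to handle; a short sketch:
###Code
# Sketch: inspect the architecture and the activation of each dense layer.
model.summary()
print([layer.get_config()["activation"] for layer in model.layers])  # expected: ['relu', 'relu', 'sigmoid']
###Output
_____no_output_____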
###Markdown
Perform predictions and measure runtime:
###Code
total_duration = 0
for i in range(1,11):
input_array = dataset[i][0:8]
input = tf.reshape(input_array,[1,8])
print("Input: " + str(np.array(input[0]).flatten().tolist()))
time_before = int(round(time.time_ns() / 1000))
predictions = model.predict(input)
time_after = int(round(time.time_ns() / 1000))
print("Preditction: " + str(predictions))
total_duration += time_after - time_before
print("Processing time in microseconds: " + str(time_after - time_before))
average_duration = float(total_duration)/10
print("total_duration: " + str(total_duration))
print("average_duration: " + str(average_duration))
###Output
_____no_output_____ |
02 - Matplotlib Refersher.ipynb | ###Markdown
Matplotlib API refresher
###Code
% matplotlib notebook
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Matplotlib "stateful" apiModifies "current figure"
###Code
plt.plot(range(10))
plt.plot(range(10, 0, -1))
import numpy as np
plt.plot(np.sin(np.linspace(-3, 3, 20)))
###Output
_____no_output_____
###Markdown
Works also with subplot
###Code
plt.figure()
# create a subplot by specifying grid width, grid height and index:
# 2x2 grid, first plot (one-indexed)
plt.subplot(2, 2, 1)
# plt.title changes "current axes"
plt.title("first plot")
plt.plot(np.random.uniform(size=10))
plt.subplot(2, 2, 2)
# now subplot 2 is current
plt.title("second plot")
plt.plot(np.random.uniform(size=10), 'o')
plt.subplot(2, 2, 3)
plt.title("third plot")
plt.barh(range(10), np.random.uniform(size=10))
plt.subplot(2, 2, 4)
plt.title("fourth plot")
plt.imshow(np.random.uniform(size=(10, 10)))
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Object oriented / Axis oriented API is more powerful. Have an object per axes and plot directly to that axes; methods modifying the axes have a ``set_`` prefix!
###Code
plt.figure()
ax11 = plt.subplot(2, 2, 1)
ax21 = plt.subplot(2, 2, 2)
ax12 = plt.subplot(2, 2, 3)
ax22 = plt.subplot(2, 2, 4)
ax11.set_title("ax11")
ax21.set_title("ax21")
ax12.set_title("ax12")
ax22.set_title("ax22")
ax21.plot(np.random.randn(10))
plt.tight_layout()
## My favorite interface: plt.subplots!
fig, axes = plt.subplots(2, 2)
ax11, ax21, ax12, ax22 = axes.ravel()
ax11.set_title("ax11")
ax21.set_title("ax21")
ax12.set_title("ax12")
ax22.set_title("ax22")
###Output
_____no_output_____
###Markdown
Exercise: Create a grid plot with one row and four columns where the first entry plots the function ``f(x) = x``, the second ``f(x) = x ** 2``, the third ``f(x) = x ** 3`` and the fourth ``f(x) = x ** 4``.
###Code
# Your solution
###Output
_____no_output_____
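###Markdown
One possible solution for the exercise above (a sketch, not the only way to do it):
###Code
# plot x, x**2, x**3 and x**4 in a 1x4 grid
import numpy as np
x = np.linspace(-2, 2, 50)
fig, axes = plt.subplots(1, 4, figsize=(10, 3))
for i, ax in enumerate(axes):
    ax.plot(x, x ** (i + 1))
    ax.set_title("$x^%d$" % (i + 1))
plt.tight_layout()
###Output
_____no_output_____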
###Markdown
More fun with subplots!
###Code
import numpy as np
sin = np.sin(np.linspace(-4, 4, 100))
fig, axes = plt.subplots(2, 2)
plt.plot(sin)
fig, axes = plt.subplots(2, 2)
axes[0, 0].plot(sin)
asdf = plt.gca()
asdf.plot(sin, c='k')
###Output
_____no_output_____
###Markdown
More on plotting commands and styling
###Code
fig, ax = plt.subplots(2, 4, figsize=(10, 5))
ax[0, 0].plot(sin)
ax[0, 1].plot(range(100), sin) # same as above
ax[0, 2].plot(np.linspace(-4, 4, 100), sin)
ax[0, 3].plot(sin[::10], 'o')
ax[1, 0].plot(sin, c='r')
ax[1, 1].plot(sin, '--')
ax[1, 2].plot(sin, lw=3)
ax[1, 3].plot(sin[::10], '--o')
plt.tight_layout() # makes stuff fit - usually works
###Output
_____no_output_____
###Markdown
Exercise: See how many lines you can put in a plot and still distinguish them (using the styles described above). How many can you distinguish if you don't use color? See the [lines bars and markers](https://matplotlib.org/gallery.html#lines_bars_and_markers) section of the matplotlib examples for more styles.
###Code
# solution
###Output
_____no_output_____
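###Markdown
A possible starting point for the exercise (sketch): plot several offset sine curves with different style strings and see where they stop being distinguishable.
###Code
# cycle through a few line styles / markers; add more entries to push the limit
styles = ['-', '--', '-.', ':', '-o', '--s', '-.^', ':v']
x = np.linspace(0, 2 * np.pi, 40)
fig, ax = plt.subplots()
for i, style in enumerate(styles):
    ax.plot(x, np.sin(x) + 0.5 * i, style, label="line %d" % i)
ax.legend(ncol=2)
###Output
_____no_output_____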
###Markdown
Scatter vs plot: scatter allows modifying individual points, while plot only allows modifying them all in the same way:
###Code
x = np.random.uniform(size=50)
y = x + np.random.normal(0, .1, size=50)
fig, ax = plt.subplots(2, 2, figsize=(5, 5),
subplot_kw={'xticks': (), 'yticks': ()})
ax[0, 0].scatter(x, y)
ax[0, 0].set_title("scatter")
ax[0, 1].plot(x, y, 'o')
ax[0, 1].set_title("plot")
ax[1, 0].scatter(x, y, c=x-y, cmap='bwr', edgecolor='k')
ax[1, 1].scatter(x, y, c=x-y, s=np.abs(np.random.normal(scale=20, size=50)), cmap='bwr', edgecolor='k')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Imshow, interpolation, colormaps
- three important kinds of color maps: sequential, diverging, qualitative
- default colormap: viridis
- default qualitative colormap: tab10
###Code
from matplotlib.cbook import get_sample_data
f = get_sample_data("axes_grid/bivariate_normal.npy", asfileobj=False)
arr = np.load(f)
fig, ax = plt.subplots(2, 2)
im1 = ax[0, 0].imshow(arr)
ax[0, 1].imshow(arr, interpolation='bilinear')
im3 = ax[1, 0].imshow(arr, cmap='gray')
im4 = ax[1, 1].imshow(arr, cmap='bwr', vmin=-1.5, vmax=1.5)
plt.colorbar(im1, ax=ax[0, 0])
plt.colorbar(im3, ax=ax[1, 0])
plt.colorbar(im4, ax=ax[1, 1])
###Output
_____no_output_____
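###Markdown
The cell above shows sequential (viridis, gray) and diverging (bwr) maps; for completeness, here is a small sketch using the qualitative 'tab10' colormap on categorical labels.
###Code
# qualitative colormap sketch: color scatter points by an integer class label
labels = np.random.randint(0, 10, size=100)
points = np.random.uniform(size=(100, 2))
fig, ax = plt.subplots()
sc = ax.scatter(points[:, 0], points[:, 1], c=labels, cmap='tab10', vmin=-.5, vmax=9.5)
plt.colorbar(sc, ax=ax, ticks=range(10))
###Output
_____no_output_____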
###Markdown
The problem of overplotting
###Code
x1, y1 = 1 / np.random.uniform(-1000, 100, size=(2, 10000))
x2, y2 = np.dot(np.random.uniform(size=(2, 2)), np.random.normal(size=(2, 1000)))
x = np.hstack([x1, x2])
y = np.hstack([y1, y2])
plt.figure()
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.scatter(x, y)
fig, ax = plt.subplots(1, 3, figsize=(10, 4),
subplot_kw={'xlim': (-1, 1),
'ylim': (-1, 1)})
ax[0].scatter(x, y)
ax[1].scatter(x, y, alpha=.1)
ax[2].scatter(x, y, alpha=.01)
plt.figure()
plt.hexbin(x, y, bins='log', extent=(-1, 1, -1, 1), gridsize=50, linewidths=0)
plt.colorbar()
###Output
_____no_output_____
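###Markdown
Besides transparency and hexbin, a plain 2D histogram is another common remedy for overplotting (sketch):
###Code
# same data as above, binned into a 2D histogram
plt.figure()
plt.hist2d(x, y, bins=50, range=[[-1, 1], [-1, 1]])
plt.colorbar()
###Output
_____no_output_____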
###Markdown
Twinx
###Code
import pandas as pd
df = pd.DataFrame({'Math PhDs awarded (US)': {'2000': 1050,
'2001': 1010,
'2002': 919,
'2003': 993,
'2004': 1076,
'2005': 1205,
'2006': 1325,
'2007': 1393,
'2008': 1399,
'2009': 1554},
'Total revenue by arcades (US)': {'2000': 1196000000,
'2001': 1176000000,
'2002': 1269000000,
'2003': 1240000000,
'2004': 1307000000,
'2005': 1435000000,
'2006': 1601000000,
'2007': 1654000000,
'2008': 1803000000,
'2009': 1734000000}})
# could also do df.plot()
phds = df['Math PhDs awarded (US)']
revenue = df['Total revenue by arcades (US)']
years = df.index
plt.figure()
ax1 = plt.gca()
line1, = ax1.plot(years, phds)
line2, = ax1.plot(years, revenue, c='r')
plt.legend((line1, line2), ("math PhDs awarded", "revenue by arcades"))
plt.figure()
ax1 = plt.gca()
line1, = ax1.plot(years, phds)
ax2 = ax1.twinx()
line2, = ax2.plot(years, revenue, c='r')
plt.legend((line1, line2), ("math PhDs awarded", "revenue by arcades"))
ax1.set_ylabel("Math PhDs awarded")
ax2.set_ylabel("revenue by arcades")
###Output
_____no_output_____ |
example/example_read_plot_sr2.ipynb | ###Markdown
Notebook for reading SkyTEM system response files. This notebook shows you how to read and plot SkyTEM's system response (.sr2) files. First import some libraries; `libaarhusxyz` is the library needed to read the .sr2 file format (its `parse_sr2` function is used below).
###Code
import os.path
import numpy as np
import matplotlib.pyplot as plt
import libaarhusxyz
###Output
_____no_output_____
###Markdown
Second define the system response file (.sr2) that you like to plot
###Code
input_data_dirname = "./data/"
sr2_infile_name = "00473_NOR_SkyTEM304.sr2"
sr2_infile = os.path.join(input_data_dirname, sr2_infile_name)
###Output
_____no_output_____
###Markdown
read the file
###Code
sr=libaarhusxyz.parse_sr2(sr2_infile)
###Output
_____no_output_____
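###Markdown
Optionally, inspect what was parsed before plotting. This is a sketch; it assumes, as the plotting code below does, that the result can be indexed with "system_response" and holds a two-column (time, amplitude) array.
###Code
# quick look at the parsed system response
print(type(sr))
print(sr["system_response"].shape)
print(sr["system_response"][:3])
###Output
_____no_output_____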
###Markdown
and plot the system response
###Code
fig, ax = plt.subplots(2,1, figsize=(10,5))
ax[0].plot(sr["system_response"][:,0], sr["system_response"][:,1],".-", lw=1, ms=1)
ax[0].set_xlabel("Time [s]")
ax[0].set_ylabel("Amplitude [1]")
ax[0].set_title("file: {0}".format(sr2_infile_name))
ax[0].grid()
ax[1].plot(sr["system_response"][:,0], sr["system_response"][:,1], ".-", lw=1, ms=1)
ax[1].set_xlabel("Time [s]")
ax[1].set_ylabel("Amplitude [1]")
ax[1].set_xlim([0, sr["system_response"][:,0].max()])
ax[1].grid()
plt.tight_layout()
###Output
_____no_output_____ |
07_conversion_de_tipos_basicos.ipynb | ###Markdown
[](https://www.pythonista.io) Conversiones de tipos básicos.En este capítulo se examinarán las funciones enfocadas a convertir a objetos de tipo:* ```int```* ```float```* ```bool```* ```complex```* ```str```* ```bytes``` Obtención del tipo de dato de un objeto.La función ```type()``` regresa el tipo de dato o la clase a la que pertenece un objeto el cual es ingresado como argumento con la siguiente sintaxis:```type()```Donde: * `````` es cualquier objeto. **Ejemplos:** * Las siguientes celdas utlizarán la función ```type()``` para conocer el tipo de datos del que se trata. * La siguiente celda desplegará el tipo al que pertence el objeto ```"Hola"```, el cual corresponde a ```str```.
###Code
type("Hola")
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```b"Hola"```, el cual corresponde a ```bytes```.
###Code
type(b"Hola")
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```12```, el cual corresponde a ```int```.
###Code
type(12)
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```12.```, el cual corresponde a ```float```.
###Code
type(12.)
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```23j```, el cual corresponde a ```complex```.
###Code
type(23j)
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```True```, el cual corresponde a ```bool```.
###Code
type(True)
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```[1, 2, 3]```, el cual corresponde a ```list```.
###Code
type([1, 2, 3])
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```(1, 2, 3)```, el cual corresponde a ```tuple```.
###Code
type((1, 2, 3))
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```{1, 2, 3}```, el cual corresponde a ```set```.
###Code
type({1, 2, 3})
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```{'uno': '1'}```, el cual corresponde a ```dict```.
###Code
type({'uno': '1'})
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```None```, el cual corresponde a ```NoneType```.
###Code
type(None)
###Output
_____no_output_____
###Markdown
La función ```int()```.Esta función transforma un objeto compatible que es ingresado como argumento a un objeto tipo ```int```. La sintaxis es la siguiente:```int()```Donde: * `````` es el objeto que será convertido a un objeto de tipo ```int```. Particularidades.* Es posible convertir objetos de tipo ```str``` que representen correctamente a un número entero.* Los objetos de tipo ```float``` son truncados en la parte entera. * ```True``` es convertido en ```1```.* ```False``` es convertido en ```0```. * La función ```int()``` no es compatible con objetos tipo ```complex```, lo que originará un error de tipo ```TypeError```.* La función ```int()``` no es compatible con ```None```, lo que originará un error de tipo ```TypeError```. **Ejemplos:** * La siguiente celda convertirá en un objeto ```int``` al objeto de tipo ```bool``` que se ingresa como argumento.
###Code
int(True)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda contiene una representación correcta de un entero negativo, por lo que la función ```int()``` podrá realizar la conversión correctamente.
###Code
int("-12")
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda contiene una representación correcta de un número real, por lo que la función ```int()``` no podrá realizar la conversión correctamente y regresará un error de tipo ```ValueError```.
###Code
int("28.8")
int(b"12")
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda no contiene una representación correcta de un número entero, por lo que la función ```int()``` será incapaz de realizar la conversión y desencadenará un error de tipo ```ValueError```.
###Code
int('Hola')
###Output
_____no_output_____
###Markdown
* Los argumentos de las siguiente celdas contienen objetos tipo ```float```, por lo que la función ```int()``` truncará el valor a enteros.
###Code
int(5.6)
int(-5.3)
###Output
_____no_output_____
###Markdown
* Los argumentos de las siguientes celdas son objetos de tipo ```complex```, por lo que la función ```int()``` será incapaz de realizar la conversión y desencadenará un error de tipo ```TypeError```.
###Code
int(12 + 45.2j)
int(-5j)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda es ```None```, el cual no puede ser representado por algún valor numérico, lo que desencadenará un error de tipo ```TypeError```.
###Code
int(None)
###Output
_____no_output_____
###Markdown
La función ```float()```. Transforma a un objeto de tipo compatible que se ingrese como argumento a uno de tipo ```float```.La sintaxis es la siguiente:```float()```Donde: * `````` es el objeto que será convertido a un objeto de tipo ```float```. Particularidades.* Puede convertir objetos de tipo ```str``` que contengan una representación correcta a un número real.* Es compatible con los objetos tipo ```int```.* ```True``` es convertido en ```1.0```* ```False``` es convertido en ```0.0```. * La función ```float()``` no es compatible con objetos tipo ```complex```, lo que originará un error de tipo ```TypeError```.* La función ```float()``` no es compatible con ```None```, lo que originará un error de tipo ```TypeError```. **Ejemplos:** * El argumento de la siguiente celda contiene una representación correcta de un número real, por lo que la función ```float()``` podrá realizar la conversión correctamente.
###Code
float("-12.6")
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda no contiene una representación correcta de un número, por lo que la función ```float()``` será incapaz de realizar la conversión y desencadenará un error de tipo ```ValueError```.
###Code
float('Hola')
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda es un objeto de tipo ```int```, por lo que la función ```float()``` será capaz de realizar la conversión.
###Code
float(-5)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiete celda es ```False```, por lo que la función ```float()``` dará por resutado ```0.0```.
###Code
float(False)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiete celda es ```None```, por lo que la función ```float()``` originará un error de tipo ```TypeError```.
###Code
float(None)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda es un objeto de tipo ```complex```, por lo que la función ```float()``` será incapaz de realizar la conversión y desencadenará un error de tipo ```TypeError```.
###Code
float(12.5 + 33j)
###Output
_____no_output_____
###Markdown
La función ```complex()```.Transforma a un objeto compatible a uno de tipo ```complex``` y puede ser usada con las siguientes sintaxis: Ingresando pares numéricos como argumentos.```complex(, )```Donde:* `````` corresponde al primer argumento que se ingresa a la función ```complex()``` y puede ser un objeto de tipo ```int```, ```float``` e incluso ```bool```. Este será usado como el componente real del número complejo.* `````` corresponde al segundo argumento que se ingresa a la función ```complex()``` y puede ser un objeto de tipo ```int```, ```float``` e incluso ```bool```. Este será usado como el componente imaginario del número complejo. Su valor por defecto es ```0```. Ingresando una cadena de caracteres como argumento.```complex()```Donde:* `````` corresponde a un objetos de tipo ```str``` que contenga una representación correcta de un número ```complex```. **Ejemplos:** * Las siguientes celdas definen argumentos numéricos para la función ```complex()```.
###Code
complex(3.5, 2)
complex(8.3)
complex(False)
complex(True)
complex(True, True)
###Output
_____no_output_____
###Markdown
* Las siguientes celdas definen argumentos de tipo ```str``` cuyos contenidos son compatibles con la función ```complex()```.
###Code
complex("23+5j")
complex("23")
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda es la representación de una expresión que al ser evaluada da por resultado un objeto de tipo ```complex```. Sin embargo, la función ```complex()``` no tiene la capacidad de evaluar expresiones, originándose un error de tipo ```ValueError```.
###Code
complex("23 + 5j")
###Output
_____no_output_____
###Markdown
La función ```bool()```.Transforma en booleano a un objeto. ```bool()```Donde: * `````` es el objeto que será convertido a un objeto de tipo ```bool```. Particularidades.* El ```0``` equivale a ```False```.* El valor```None``` equivale a ```False```.* Una colección vacía equivale a ```False```.* Cualquier otro objeto equivale a ```True```. **Ejemplos:** * Las siguientes celdas utlizarán la función ```bool()``` ingresando diversos objetos como argumento, los cuales darán por resultado ```True```.
###Code
bool(-3)
bool(2)
bool((12, 4, 78))
bool("Hola")
bool("0")
bool("False")
###Output
_____no_output_____
###Markdown
* Las siguientes celdas utlizarán la función ```bool()``` ingresando diversos objetos como argumento, los cuales darán por resultado ```False```.
###Code
bool(0)
bool("")
bool({})
bool([])
bool(None)
###Output
_____no_output_____
###Markdown
La función ```str()```.La función ```str()```permite realizar transformaciones a objetos tipos ```str``` mediante las siguientes sintaxis: Transformación de objetos complatibles.```str()```Donde:* `````` es un objeto compatible función ```str()``` prácticamente todos los tipos básicos son compatibles con esta función y el resultado es una cadena de caracteres que representa al objeto en cuestión. Transformación de un objeto de tipo ```bytes``` o ```bytearray``` a ```str```.```str(, encoding=)```Donde:* `````` es un objeto de tipo ```bytes``` o ```bytearray``` cuyo contenido será convertido a una cadena de caracteres.* `````` corresponde al [tipo de codificación](https://docs.python.org/3/library/codecs.htmlstandard-encodings) a la que se convertirá la cadena de bytes. Por lo general es ```"utf-8"``` o ```"ascii" ```. En caso de que no se defina el atributo ```encoding```, el resultado será una representación del objeto ```bytes```. **Ejemplos:** * Las siguientes celdas regresará un objeto ```str``` con la representación de los objetos que se ingresan como argumentos.
###Code
str(True)
str(False)
str(12 + 3.5j)
str({'nombre': 'Juan'})
str(None)
str(b'Saludos')
###Output
_____no_output_____
###Markdown
* La siguiente celdas convertirá el contenido del objeto ```b'Saludos'``` en una cadena de caracteres usando la codificación *ASCII*.
###Code
str(b'Saludos', encoding='ascii')
###Output
_____no_output_____
###Markdown
* La siguiente celdas convertirá el contenido del objeto ```bytearray(b'Saludos')``` en una cadena de caracteres usando la codificación *ASCII*.
###Code
str(bytearray(b'Saludos'), encoding='ascii')
###Output
_____no_output_____
###Markdown
* La siguiente celda convertirá el contenido del objeto ```b'G\xc3\xb6del'``` en una cadena de caracteres usando la codificación *UTF-8*.
###Code
str(b'G\xc3\xb6del', encoding='utf-8')
###Output
_____no_output_____
###Markdown
* La siguiente celda intentará convertir el contenido del objeto ```b'G\xc3\xb6del'``` en una cadena de caracteres usando la codificación *ASCII*. Sin embargo, debido a que dicha codificación no es compatible con el contenido, se desencadenará un error de tipo ```UnicodeDecodeError```.
###Code
str(b'G\xc3\xb6del', encoding='ascii')
###Output
_____no_output_____
###Markdown
La función ```bytes()```.Transforma a un objeto que es ingresados como argumento en un objeto de tipo ```bytes```.Las sintaxis son las siguiente:```bytes(, encoding=)```Donde:* `````` es un objeto de tipo ```str``` cuyo contenido será convertido a una cadena de bytes.* `````` corresponde al tipo de codificación a la que se convertirá la cadena de caracteres. Por lo general es ```"utf-8"``` o ```"ascii" ```. En caso de que no se defina este argumento, se desencadenará un error de tipo ```UnicodeError```. **Ejemplos:** La siguiente celda regresará la representación en bytes del objeto ```'Saludos'``` usando la codificación *ASCII*.
###Code
bytes('Saludos', encoding='ascii')
###Output
_____no_output_____
###Markdown
La siguiente celda regresará la representación en bytes del objeto ```'Gödel'``` usando la codificación *UTF-8*.
###Code
bytes('Gödel', encoding='utf-8')
###Output
_____no_output_____
###Markdown
La siguiente celda regresará la representación en bytes del objeto ```'Gödel'``` usando la codificación *Latin-1*.
###Code
bytes('Gödel', encoding='latin-1')
###Output
_____no_output_____
###Markdown
* La siguiente celda intentará realizar la representación en bytes del objeto ```'Gödel'``` usando la codificación *ASCII*. Sin embargo, dicha codificación no contiene un código para el caracter ```ö```, por lo que se generará un error de tipo ```UnicodeEncodeError```.
###Code
bytes('Gödel', encoding='ascii')
###Output
_____no_output_____
###Markdown
[](https://www.pythonista.io) Conversiones de tipos básicos.En este capítulo se examinarán las funciones enfocadas a convertir a objetos de tipo:* ```int```* ```float```* ```bool```* ```complex```* ```str```* ```bytes``` Obtención del tipo de dato de un objeto.La función ```type()``` regresa el tipo de dato o la clase a la que pertenece un objeto el cual es ingresado como argumento con la siguiente sintaxis:```type()```Donde: * `````` es cualquier objeto. **Ejemplos:** * Las siguientes celdas utlizarán la función ```type()``` para conocer el tipo de datos del que se trata. * La siguiente celda desplegará el tipo al que pertence el objeto ```"Hola"```, el cual corresponde a ```str```.
###Code
type("Hola")
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```b"Hola"```, el cual corresponde a ```bytes```.
###Code
type(b"Hola")
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```12```, el cual corresponde a ```int```.
###Code
type(12)
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```12.```, el cual corresponde a ```float```.
###Code
type(12.)
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```23j```, el cual corresponde a ```complex```.
###Code
type(23j)
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```True```, el cual corresponde a ```bool```.
###Code
type(True)
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```[1, 2, 3]```, el cual corresponde a ```list```.
###Code
type([1, 2, 3])
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```(1, 2, 3)```, el cual corresponde a ```tuple```.
###Code
type((1, 2, 3))
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```{1, 2, 3}```, el cual corresponde a ```set```.
###Code
type({1, 2, 3})
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```{'uno': '1'}```, el cual corresponde a ```dict```.
###Code
type({'uno': '1'})
###Output
_____no_output_____
###Markdown
* La siguiente celda desplegará el tipo al que pertence el objeto ```None```, el cual corresponde a ```NoneType```.
###Code
type(None)
###Output
_____no_output_____
###Markdown
La función ```int()```.Esta función transforma un objeto compatible que es ingresado como argumento a un objeto tipo ```int```. La sintaxis es la siguiente:```int()```Donde: * `````` es el objeto que será convertido a un objeto de tipo ```int```. Particularidades.* Es posible convertir objetos de tipo ```str``` que representen correctamente a un número entero.* Los objetos de tipo ```float``` son truncados en la parte entera. * ```True``` es convertido en ```1```.* ```False``` es convertido en ```0```. * La función ```int()``` no es compatible con objetos tipo ```complex```, lo que originará un error de tipo ```TypeError```.* La función ```int()``` no es compatible con ```None```, lo que originará un error de tipo ```TypeError```. **Ejemplos:** * La siguiente celda convertirá en un objeto ```int``` al objeto de tipo ```bool``` que se ingresa como argumento.
###Code
int(True)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda contiene una representación correcta de un entero negativo, por lo que la función ```int()``` podrá realizar la conversión correctamente.
###Code
int("-12")
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda contiene una representación correcta de un número real, por lo que la función ```int()``` no podrá realizar la conversión correctamente y regresará un error de tipo ```ValueError```.
###Code
int("28.8")
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda contiene una representación correcta de un número entero, por lo que la función ```int()``` podrá realizar la conversión correctamente.
###Code
int(b"12")
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda no contiene una representación correcta de un número entero, por lo que la función ```int()``` será incapaz de realizar la conversión y desencadenará un error de tipo ```ValueError```.
###Code
int('Hola')
###Output
_____no_output_____
###Markdown
* Los argumentos de las siguientes celdas contienen objetos tipo ```float```, por lo que la función ```int()``` truncará el valor a enteros.
###Code
int(5.6)
int(-5.3)
###Output
_____no_output_____
###Markdown
* Los argumentos de las siguientes celdas son objetos de tipo ```complex```, por lo que la función ```int()``` será incapaz de realizar la conversión y desencadenará un error de tipo ```TypeError```.
###Code
int(12 + 45.2j)
int(-5j)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda es ```None```, el cual no puede ser representado por algún valor numérico, lo que desencadenará un error de tipo ```TypeError```.
###Code
int(None)
###Output
_____no_output_____
###Markdown
La función ```float()```. Transforma a un objeto de tipo compatible que se ingrese como argumento a uno de tipo ```float```.La sintaxis es la siguiente:```float()```Donde: * `````` es el objeto que será convertido a un objeto de tipo ```float```. Particularidades.* Puede convertir objetos de tipo ```str``` que contengan una representación correcta a un número real.* Es compatible con los objetos tipo ```int```.* ```True``` es convertido en ```1.0```* ```False``` es convertido en ```0.0```. * La función ```float()``` no es compatible con objetos tipo ```complex```, lo que originará un error de tipo ```TypeError```.* La función ```float()``` no es compatible con ```None```, lo que originará un error de tipo ```TypeError```. **Ejemplos:** * El argumento de la siguiente celda contiene una representación correcta de un número real, por lo que la función ```float()``` podrá realizar la conversión correctamente.
###Code
float("-12.6")
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda no contiene una representación correcta de un número, por lo que la función ```float()``` será incapaz de realizar la conversión y desencadenará un error de tipo ```ValueError```.
###Code
float('Hola')
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda es un objeto de tipo ```int```, por lo que la función ```float()``` será capaz de realizar la conversión.
###Code
float(-5)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiete celda es ```False```, por lo que la función ```float()``` dará por resutado ```0.0```.
###Code
float(False)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiete celda es ```None```, por lo que la función ```float()``` originará un error de tipo ```TypeError```.
###Code
float(None)
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda es un objeto de tipo ```complex```, por lo que la función ```float()``` será incapaz de realizar la conversión y desencadenará un error de tipo ```TypeError```.
###Code
float(12.5 + 33j)
###Output
_____no_output_____
###Markdown
La función ```complex()```.Transforma a un objeto compatible a uno de tipo ```complex``` y puede ser usada con las siguientes sintaxis: Ingresando pares numéricos como argumentos.```complex(, )```Donde:* `````` corresponde al primer argumento que se ingresa a la función ```complex()``` y puede ser un objeto de tipo ```int```, ```float``` e incluso ```bool```. Este será usado como el componente real del número complejo.* `````` corresponde al segundo argumento que se ingresa a la función ```complex()``` y puede ser un objeto de tipo ```int```, ```float``` e incluso ```bool```. Este será usado como el componente imaginario del número complejo. Su valor por defecto es ```0```. Ingresando una cadena de caracteres como argumento.```complex()```Donde:* `````` corresponde a un objetos de tipo ```str``` que contenga una representación correcta de un número ```complex```. **Ejemplos:** * Las siguientes celdas definen argumentos numéricos para la función ```complex()```.
###Code
complex(3.5, 2)
complex(8.3)
complex(False)
complex(True)
complex(True, True)
###Output
_____no_output_____
###Markdown
* Las siguientes celdas definen argumentos de tipo ```str``` cuyos contenidos son compatibles con la función ```complex()```.
###Code
complex("23+5j")
complex("23")
###Output
_____no_output_____
###Markdown
* El argumento de la siguiente celda es la representación de una expresión que al ser evaluada da por resultado un objeto de tipo ```complex```. Sin embargo, la función ```complex()``` no tiene la capacidad de evaluar expresiones, originándose un error de tipo ```ValueError```.
###Code
complex("23 + 5j")
23 + 5j
###Output
_____no_output_____
###Markdown
La función ```bool()```.Transforma en booleano a un objeto. ```bool()```Donde: * `````` es el objeto que será convertido a un objeto de tipo ```bool```. Particularidades.* El ```0``` equivale a ```False```.* El valor```None``` equivale a ```False```.* Una colección vacía equivale a ```False```.* Cualquier otro objeto equivale a ```True```. **Ejemplos:** * Las siguientes celdas utlizarán la función ```bool()``` ingresando diversos objetos como argumento, los cuales darán por resultado ```True```.
###Code
bool(-3)
bool(2)
bool((12, 4, 78))
bool("Hola")
bool("0")
bool("False")
###Output
_____no_output_____
###Markdown
* Las siguientes celdas utlizarán la función ```bool()``` ingresando diversos objetos como argumento, los cuales darán por resultado ```False```.
###Code
bool(0)
bool("")
bool({})
bool([])
bool(None)
###Output
_____no_output_____
###Markdown
La función ```str()```.La función ```str()```permite realizar transformaciones a objetos tipos ```str``` mediante las siguientes sintaxis: Transformación de objetos complatibles.```str()```Donde:* `````` es un objeto compatible función ```str()``` prácticamente todos los tipos básicos son compatibles con esta función y el resultado es una cadena de caracteres que representa al objeto en cuestión. Transformación de un objeto de tipo ```bytes``` o ```bytearray``` a ```str```.```str(, encoding=)```Donde:* `````` es un objeto de tipo ```bytes``` o ```bytearray``` cuyo contenido será convertido a una cadena de caracteres.* `````` corresponde al [tipo de codificación](https://docs.python.org/3/library/codecs.htmlstandard-encodings) a la que se convertirá la cadena de bytes. Por lo general es ```"utf-8"``` o ```"ascii" ```. En caso de que no se defina el atributo ```encoding```, el resultado será una representación del objeto ```bytes```. **Ejemplos:** * Las siguientes celdas regresará un objeto ```str``` con la representación de los objetos y expresiones que se ingresan como argumentos.
###Code
str(True)
str(False)
str(15 / 2)
str(12 + 3.5j)
str({'nombre': 'Juan'})
str(None)
str(b'Saludos')
###Output
_____no_output_____
###Markdown
* La siguiente celdas convertirá el contenido del objeto ```b'Saludos'``` en una cadena de caracteres usando la codificación *ASCII*.
###Code
str(b'Saludos', encoding='ascii')
###Output
_____no_output_____
###Markdown
* La siguiente celdas convertirá el contenido del objeto ```bytearray(b'Saludos')``` en una cadena de caracteres usando la codificación *ASCII*.
###Code
str(bytearray(b'Saludos'), encoding='ascii')
###Output
_____no_output_____
###Markdown
* La siguiente celda convertirá el contenido del objeto ```b'G\xc3\xb6del'``` en una cadena de caracteres usando la codificación *UTF-8*.
###Code
str(b'G\xc3\xb6del', encoding='utf-8')
###Output
_____no_output_____
###Markdown
* La siguiente celda intentará convertir el contenido del objeto ```b'G\xc3\xb6del'``` en una cadena de caracteres usando la codificación *ASCII*. Sin embargo, debido a que dicha codificación no es compatible con el contenido, se desencadenará un error de tipo ```UnicodeDecodeError```.
###Code
str(b'G\xc3\xb6del', encoding='ascii')
###Output
_____no_output_____
###Markdown
La función ```bytes()```.Transforma a un objeto que es ingresados como argumento en un objeto de tipo ```bytes```.Las sintaxis son las siguiente:```bytes(, encoding=)```Donde:* `````` es un objeto de tipo ```str``` cuyo contenido será convertido a una cadena de bytes.* `````` corresponde al tipo de codificación a la que se convertirá la cadena de caracteres. Por lo general es ```"utf-8"``` o ```"ascii" ```. En caso de que no se defina este argumento, se desencadenará un error de tipo ```UnicodeError```. **Ejemplos:** La siguiente celda regresará la representación en bytes del objeto ```'Saludos'``` usando la codificación *ASCII*.
###Code
bytes('Saludos', encoding='ascii')
###Output
_____no_output_____
###Markdown
La siguiente celda regresará la representación en bytes del objeto ```'Gödel'``` usando la codificación *UTF-8*.
###Code
bytes('Gödel', encoding='utf-8')
###Output
_____no_output_____
###Markdown
La siguiente celda regresará la representación en bytes del objeto ```'Gödel'``` usando la codificación *Latin-1*.
###Code
bytes('Gödel', encoding='latin-1')
###Output
_____no_output_____
###Markdown
* La siguiente celda intentará realizar la representación en bytes del objeto ```'Gödel'``` usando la codificación *ASCII*. Sin embargo, dicha codificación no contiene un código para el caracter ```ö```, por lo que se generará un error de tipo ```UnicodeEncodeError```.
###Code
bytes('Gödel', encoding='ascii')
###Output
_____no_output_____ |
M_commandlineparser.ipynb | ###Markdown
Parsing the command line arguments. As an example, `ls` by default displays the contents of the directory. It can be given positional arguments, so called because the command knows what to do with them based only on their position, as in `cp source final`. Optional arguments, by contrast, have a default value, so it is not strictly necessary to provide one; we can change the behavior by passing them, e.g. `ls -l`. A very useful habit is to invoke the help text to find out how a command works: `ls --help` (or `ls -h`). In a script this can be achieved simply by using the `sys` module or the more customizable `argparse`. We create two scripts for testing (called parsing_sys.py and parsing_argparse.py, in the same folder). In Python, the `sys` module provides `sys.argv`, a list with the arguments passed to the script:
###Code
import sys
print("This is the name of the script: ", sys.argv[0]) # 0 is the script name
print("Number of arguments: ", len(sys.argv))
print("The arguments are: " , str(sys.argv))
###Output
_____no_output_____
###Markdown
If we invoke `python script.py "hello friend"`, the new argument we included has index 1: `sys.argv[1]`. This can be extended to any number of arguments, but it is mainly practical for simple scripts.
###Code
print("This is the name of the script: ", sys.argv[0]) # 0 is the script name
print("Number of arguments (including script name): ", len(sys.argv))
print("The arguments are: \n")
n = len(sys.argv)
for i in range(1, n):
print(sys.argv[i], end = " ")
Sum = 0
for i in range(1, n):
Sum += int(sys.argv[i])
print("\ntotal sum:", Sum)
# python script.py 2 3 4
###Output
_____no_output_____
###Markdown
Thanks to `argparse` we can handle the absence or presence of arguments, especially when some of them are required for the program to work. The `-h` help option is always available and explains how to use the program. Below is code in which a positional argument is created; by default it is treated as a string, and the script is told to print it:
###Code
import argparse
parser = argparse.ArgumentParser() # object creation
parser.add_argument("echo", help="echo the string you use here") # accepted options and description
args = parser.parse_args() # method to return data
print(args.echo)
# called as: python file "hi there"
parser = argparse.ArgumentParser(description="calculate X to the power of Y") # object creation and description (-h)
group = parser.add_mutually_exclusive_group() # add mutually exclusive arg
group.add_argument("-v", "--verbose", action="store_true") # optional arguments have two names
group.add_argument("-q", "--quiet", action="store_true")
parser.add_argument("x", type=int, help="the base") # positional arguments, order matter if there are more than one
parser.add_argument("y", type=int, help="the exponent")
args = parser.parse_args() # method to return data
answer = args.x**args.y
if args.quiet:
print(answer)
elif args.verbose:
print(f"{args.x} to the power {args.y} equals {answer}")
else:
print(f"{args.x}^{args.y} == {answer}")
# python script.py 2 3 -v
# order of optional args won't matter, but it will for positional args
###Output
_____no_output_____ |
2/z3+4.ipynb | ###Markdown
z3
###Code
import re
import numpy as np
word2tag = dict()
tag2word = dict()
def stringNorm(sent, num=False):
regex = re.compile(f'[,\.!?:;\'{"0-9" if not num else ""}\*\-“…\(\)„”—»«–––=\[\]’]')
return regex.sub('',sent.lower())
with open("data/supertags.txt") as tags:
for line in tags:
word, tag = stringNorm(line, num=True).split()
word2tag[word] = tag
if tag in tag2word:
tag2word[tag].append(word)
else:
tag2word[tag] = [word]
def bigrams2unigrams(bigrams):
return {w1: sum([float(bigrams[w1][w2]) for w2 in bigrams[w1]])/2 for w1 in bigrams}
unigrams = bigrams2unigrams(bigrams)
def getProb(word):
if word in unigrams:
return unigrams[word]*10
return 0.001
def getRandWord(words):
probs = np.array([getProb(x) for x in words])
probs = probs / np.sum(probs)
return str(np.random.choice(words, 1, p=probs)[0])
def getRandomSentence(model):
sent = stringNorm(model).split()
sentCodes = [(word2tag[x] if x in word2tag
else (print(f"***Nie znaleziono takiego słowa: {x}***"),
word2tag[('^' + x)[-3:]])[1])
for x in sent]
altWords = [tag2word[x] for x in sentCodes]
newSentence = [getRandWord(x) for x in altWords]
return ' '.join(newSentence)
sentenceList = [
"Mały Piotruś spotkał w niewielkiej restauracyjce wczoraj poznaną koleżankę.",
"Zbyt zabawne było powstrzymywanie się od śmiechu żeby to zrobić",
"Dawno nie piła tak dobrego, świeżego mleka",
"Niestety komputer postanowił odmówić posłuszeństwa",
"Mama Darka Czuje Ból Na pewno Musi Wypoczywać. ",
"Idę do sklepu kupić nowe spodnie",
"Kyeloger wywalił renderowanie filmów"
]
for model in sentenceList:
print(model,'\n',getRandomSentence(model),'\n')
###Output
Mały Piotruś spotkał w niewielkiej restauracyjce wczoraj poznaną koleżankę.
bezpłatny pan zachował za nierdzewnej rzece oraz związaną infekcję
Zbyt zabawne było powstrzymywanie się od śmiechu żeby to zrobić
przykład polskie zostało podjęcie do z sojuszu się to przeczytać
Dawno nie piła tak dobrego, świeżego mleka
w nie piła tutaj międzynarodowego całego rozwiązania
Niestety komputer postanowił odmówić posłuszeństwa
w wkład przyjął nacisnąć doradztwa
Mama Darka Czuje Ból Na pewno Musi Wypoczywać.
dziewczyna kontrolera chodzi ból a a musi sięgać
Idę do sklepu kupić nowe spodnie
ciążę o urzędu uzyskać konkretne spodnie
***Nie znaleziono takiego słowa: kyeloger***
***Nie znaleziono takiego słowa: renderowanie***
Kyeloger wywalił renderowanie filmów
projekt otrzymał nie murów
###Markdown
z4
###Code
import os
os.sys.path.append('../1/')
from z2 import loader
bigrams = loader('../1/poleval_2grams.txt', cut = 10)
def getProb2(word, prev):
if word in bigrams[prev]:
# print('aa: ', bigrams[prev][word])
return bigrams[prev][word]
return 0.001
def getRandWord2(words, prev):
probs = np.array([getProb2(x, prev) for x in words]).astype(float)
probs = probs / np.sum(probs)
return str(np.random.choice(words, 1, p=probs)[0])
def getRandomSentenceBi(model):
sent = stringNorm(model).split()
sentCodes = [(word2tag[x] if x in word2tag
else (print(f"***Nie znaleziono takiego słowa: {x}***"),
word2tag[('^' + x)[-3:]])[1])
for x in sent]
altWords = [tag2word[x] for x in sentCodes]
i = 0
newSentence = []
while i < len(altWords):
grammWords = set(altWords[i])
bigramWords = {y for y in bigrams}
goodWords = grammWords.intersection(bigramWords)
x = getRandWord(list(goodWords))
newSentence.append(x)
i+=1
while x in bigrams and i < len(altWords):
grammWords = set(altWords[i])
bigramWords = {y for y in bigrams[x]}
goodWords = grammWords.intersection(bigramWords)
if len(goodWords) <1:
break
x = getRandWord2(list(goodWords), x)
newSentence.append(x)
i+=1
newSentence.append('|')
return ' '.join(newSentence[:-1])
for model in sentenceList:
print(model,'\n',getRandomSentenceBi(model),'\n')
###Output
Mały Piotruś spotkał w niewielkiej restauracyjce wczoraj poznaną koleżankę.
ważny partner | poinformował o niewystarczającej liczbie około | kierowaną | izbę
Zbyt zabawne było powstrzymywanie się od śmiechu żeby to zrobić
spadek | średnie | zostało rozporządzenie z i stylu w to ograniczyć
Dawno nie piła tak dobrego, świeżego mleka
do nie piła | czy innego miejscowego środowiska
Niestety komputer postanowił odmówić posłuszeństwa
za akt | przeniósł | powiedzieć | dziennikarstwa
Mama Darka Czuje Ból Na pewno Musi Wypoczywać.
ustawa | trenera | chodzi | ból w znacznie | musi podlegać
Idę do sklepu kupić nowe spodnie
idę do rozłamu | stwierdzić | odwoławcze | spodnie
***Nie znaleziono takiego słowa: kyeloger***
***Nie znaleziono takiego słowa: renderowanie***
Kyeloger wywalił renderowanie filmów
system sprawdził | nie milionów
|
Keras_Starter_kit_Python_V1.ipynb | ###Markdown
> Keras_Starter_Kit_Python_draft_v1 Keras is a powerful and easy-to-use deep learning library that runs on top of frameworks such as Theano and TensorFlow and provides a high-level neural networks API to develop and evaluate deep learning models. The cheat sheet below contains basic Keras setup/coding examples for ML/DL problems: [**Keras Cheat Sheet**](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Keras_Cheat_Sheet_Python.pdf)
###Code
# Simple Keras Example
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
data = np.random.random((1000,100))
labels = np.random.randint(2,size=(1000,1))
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(data,labels,epochs=10,batch_size=32)
predictions = model.predict(data)
# Data
# Your data needs to be stored as NumPy arrays or as a list of NumPy arrays.
# Ideally, you split the data into training and test sets, for which you can also use the train_test_split helper from sklearn.model_selection.
## Keras Data Sets
from keras.datasets import boston_housing, mnist, cifar10, imdb
(x_train,y_train),(x_test,y_test) = mnist.load_data()
(x_train2,y_train2),(x_test2,y_test2) = boston_housing.load_data()
(x_train3,y_train3),(x_test3,y_test3) = cifar10.load_data()
(x_train4,y_train4),(x_test4,y_test4) = imdb.load_data(num_words=20000)
num_classes = 10
## Other dataset import from website
from urllib.request import urlopen
data = np.loadtxt(urlopen("http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"),delimiter=",")
X = data[:,0:8]
y = data [:,8]
# Preprocessing
#Preprocess input data (examples)
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28)
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
#Preprocess class labels
from keras.utils import np_utils  # needed for np_utils.to_categorical below
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
## Sequence Padding
from keras.preprocessing import sequence
x_train4 = sequence.pad_sequences(x_train4,maxlen=80)
x_test4 = sequence.pad_sequences(x_test4,maxlen=80)
## One-Hot Encoding
from keras.utils import to_categorical
Y_train = to_categorical(y_train, num_classes)
Y_test = to_categorical(y_test, num_classes)
Y_train3 = to_categorical(y_train3, num_classes)
Y_test3 = to_categorical(y_test3, num_classes)
## Multi-Hot Encoding (to-do; see the sketch below)
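# --- Added sketch (not part of the original cheat sheet): a minimal multi-hot
# encoder, assuming each sample is a list/array of integer indices (e.g. word ids).
def multi_hot(sequences, dimension):
    results = np.zeros((len(sequences), dimension))
    for i, seq in enumerate(sequences):
        results[i, list(seq)] = 1.0  # set every index that appears in the sample to 1
    return results
# Hypothetical usage: x_train4_multi_hot = multi_hot(x_train4, dimension=20000)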
## Train And Test Sets
from sklearn.model_selection import train_test_split
X_train5, X_test5, y_train5, y_test5 = train_test_split(X, y, test_size=0.33, random_state=42)
## Standardization/Normalization
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(x_train2)
standardized_X = scaler.transform(x_train2)
standardized_X_test = scaler.transform(x_test2)
# Model Architecture
## Sequential Model
from keras.models import Sequential
model = Sequential()
model2 = Sequential()
model3 = Sequential()
## Multi-Layer Perceptron (MLP)
### Binary Classification
from keras.layers import Dense
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
### Multi-Class Classification
from keras.layers import Dropout
model.add(Dense(512,activation='relu',input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512,activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10,activation='softmax'))
### Regression
model.add(Dense(64, activation='relu', input_dim=train_data.shape[1]))
model.add(Dense(1))
## Convolutional Neural Network (CNN)
from keras.layers import Activation, Conv2D, MaxPooling2D, Flatten
model2.add(Conv2D(32, (3,3), padding='same', input_shape=x_train.shape[1:]))
model2.add(Activation('relu'))
model2.add(Conv2D(32, (3,3)))
model2.add(Activation('relu'))
model2.add(MaxPooling2D(pool_size=(2,2)))
model2.add(Dropout(0.25))
model2.add(Conv2D(64, (3,3), padding='same'))
model2.add(Activation('relu'))
model2.add(Conv2D(64, (3, 3)))
model2.add(Activation('relu'))
model2.add(MaxPooling2D(pool_size=(2,2)))
model2.add(Dropout(0.25))
model2.add(Flatten())
model2.add(Dense(512))
model2.add(Activation('relu'))
model2.add(Dropout(0.5))
model2.add(Dense(num_classes))
model2.add(Activation('softmax'))
## Recurrent Neural Network (RNN)
from keras.layers import Embedding, LSTM
model3.add(Embedding(20000,128))
model3.add(LSTM(128,dropout=0.2,recurrent_dropout=0.2))
model3.add(Dense(1,activation='sigmoid'))
# Defining model architecture (generally)
model = Sequential()
# ----- (Convolution2D is the Keras 1.x name for Conv2D)
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
model.add(Convolution2D(32, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# -----
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
# Inspect Model
## Model output shape
model.output_shape
## Model summary representation
model.summary()
## Model configuration
model.get_config()
## List all weight tensors in the model
model.get_weights()
# Compile Model
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
## Multi-Layer Perceptron (MLP)
### MLP: Binary Classification
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
### MLP: Multi-Class Classification
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
### MLP: Regression
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
## Recurrent Neural Network (RNN)
model3.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Model Training
model3.fit(x_train4, y_train4, batch_size=32, epochs=15, verbose=1, validation_data=(x_test4, y_test4))
# Fit model on training data (same as above)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=10, verbose=1)
# Evaluate model on test data
score = model.evaluate(X_test, Y_test, verbose=0)
# Evaluate Your Model's Performance
score = model3.evaluate(x_test, y_test, batch_size=32)
# Prediction
model3.predict(x_test4, batch_size=32)
model3.predict_classes(x_test4,batch_size=32)
# Save/Reload Models
from keras.models import load_model
model3.save('model_file.h5')
my_model = load_model('my_model.h5')
# Model Fine-Tuning
## Optimization Parameters
from keras.optimizers import RMSprop
opt = RMSprop(lr=0.0001, decay=1e-6)
model2.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
## Early Stopping
from keras.callbacks import EarlyStopping
early_stopping_monitor = EarlyStopping(patience=2)
model3.fit(x_train4, y_train4, batch_size=32, epochs=15, validation_data=(x_test4, y_test4), callbacks=[early_stopping_monitor])
# Auto-Keras (to-do; see the sketch below)
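# --- Added sketch (not part of the original cheat sheet): Auto-Keras usage outline,
# assuming the `autokeras` package (1.x-style API) is installed; the API differs
# between versions, so treat this as an illustration rather than tested code.
# import autokeras as ak
# clf = ak.StructuredDataClassifier(max_trials=3)   # search over a few candidate models
# clf.fit(X, y, epochs=10)
# predictions = clf.predict(X)
# best_model = clf.export_model()                   # export the best Keras model found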
###Output
_____no_output_____ |
OpenML_Project_German_Credit_Quality.ipynb | ###Markdown
Implementation of an ML model for credit quality in Germany
###Code
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sbs
import sklearn
import openml as oml
sbs.set_style('darkgrid')
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
api = open("""C:\\Users\\Alfred\\PycharmProjects\\Clases_Programacion\\OpenML_API.txt""").read()
oml.config.apikey = api
###Output
_____no_output_____
###Markdown
Author: Dr. Hans Hofmann Source: [UCI](https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data)) - 1994 Please cite: [UCI](https://archive.ics.uci.edu/ml/citation_policy.html)German Credit dataset This dataset classifies people described by a set of attributes as good or bad credit risks.This dataset comes with a cost matrix: ``` Good Bad (predicted) Good 0 1 (actual) Bad 5 0 ```It is worse to class a customer as good when they are bad (5), than it is to class a customer as bad when they are good (1). Attribute description 1. Status of existing checking account, in Deutsche Mark. 2. Duration in months 3. Credit history (credits taken, paid back duly, delays, critical accounts) 4. Purpose of the credit (car, television,...) 5. Credit amount 6. Status of savings account/bonds, in Deutsche Mark. 7. Present employment, in number of years. 8. Installment rate in percentage of disposable income 9. Personal status (married, single,...) and sex 10. Other debtors / guarantors 11. Present residence since X years 12. Property (e.g. real estate) 13. Age in years 14. Other installment plans (banks, stores) 15. Housing (rent, own,...) 16. Number of existing credits at this bank 17. Job 18. Number of people being liable to provide maintenance for 19. Telephone (yes,no) 20. Foreign worker (yes,no)
###Code
task=oml.tasks.get_task(31)
dataset = task.get_dataset()
data = dataset.get_data()
data,_,_,_ = data
target= data['class']
data.drop('class', axis=1, inplace=True)
#data = pd.read_csv('dataset_31_credit-g.csv')
#target = data['class'].values
#data.drop('class', axis=1, inplace=True)
#data
target[:5]
###Output
_____no_output_____
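###Markdown
The cost matrix above can be applied directly to a confusion matrix. A minimal sketch (added here as an illustration, not from the original notebook; the confusion-matrix numbers are made up):
###Code
# rows = actual (good, bad), columns = predicted (good, bad) -- same layout as the table above
cost_matrix = np.array([[0, 1],
                        [5, 0]])
# hypothetical confusion matrix in the same ordering
example_confusion = np.array([[60, 10],
                              [ 8, 22]])
total_cost = np.sum(cost_matrix * example_confusion)  # 10*1 + 8*5 = 50
print(total_cost)
###Output
_____no_output_____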
###Markdown
We re-encode the target so that bad is 0. Positive coefficients are therefore "good" and negative ones are "bad".
###Code
target=np.where(target=='good',1,0)
target[:10]
###Output
_____no_output_____
###Markdown
Data analysis
###Code
data.columns
###Output
_____no_output_____
###Markdown
Univariate analysis checking_status
###Code
data.checking_status.value_counts()
###Output
_____no_output_____
###Markdown
Qualitative variable. **Expectation:** 'no checking' and a balance below zero should be negative for credit quality. duration
###Code
plt.hist(data.duration, bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Skewed distribution, concentrated at the short end of the curve. **Expectation:** longer durations will have fewer quality problems. credit_history
###Code
data.credit_history.value_counts()
###Output
_____no_output_____
###Markdown
purpose
###Code
data['purpose'].value_counts()
###Output
_____no_output_____
###Markdown
**Expectation:** business and investment purposes (e.g. education) will carry less risk than consumer credit.
###Code
data['purpose'].value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
credit_amount
###Code
plt.hist(data.credit_amount, bins=10)
###Output
_____no_output_____
###Markdown
savings_status
###Code
data.savings_status.value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
employment
###Code
data.employment.value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
**Expectation:** longer job tenure should correlate with employment stability and therefore with credit quality. installment_commitment
###Code
data.installment_commitment.value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
% of income taken up by each installment. **Expectation:** a higher rate means worse quality. personal_status
###Code
data.personal_status.value_counts()
###Output
_____no_output_____
###Markdown
other_parties
###Code
data.other_parties.value_counts()
###Output
_____no_output_____
###Markdown
residence_since
###Code
data.residence_since.value_counts()
###Output
_____no_output_____
###Markdown
**Expectation:** a longer time at the current residence should correlate with credit quality. property_magnitude
###Code
data.property_magnitude.value_counts()
###Output
_____no_output_____
###Markdown
age
###Code
plt.hist(data.age, bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
**Expectation:** older age should correlate with wealth and hence with credit quality. other_payment_plans
###Code
data.other_payment_plans.value_counts()
###Output
_____no_output_____
###Markdown
housing
###Code
data.housing.value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
**Expectation:** renting creates monthly cash outflows and is associated with lower wealth, which hurts credit quality. Additionally, homeowners could refinance with mortgage loans. existing_credits
###Code
data.existing_credits.value_counts()
###Output
_____no_output_____
###Markdown
job
###Code
data.job.value_counts()
###Output
_____no_output_____
###Markdown
num_dependents
###Code
data.num_dependents.value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
own_telephone
###Code
data.own_telephone.value_counts()
###Output
_____no_output_____
###Markdown
foreign_worker
###Code
data.foreign_worker.value_counts()
###Output
_____no_output_____
###Markdown
**Expectation:** being a foreign worker correlates with relocation costs and a lack of support networks, so it is negative for credit quality. Multivariate analysis Duration vs. Credit amount
###Code
fig, ax = plt.subplots(1,1, figsize=(10,10))
pd.plotting.scatter_matrix(data[['duration', 'credit_amount']], c=data.age ,ax=ax, grid=True)
from sklearn.linear_model import LinearRegression
linreg = LinearRegression().fit(data.duration.values.reshape(-1, 1), data.credit_amount)
line = np.linspace(data.duration.min(), data.duration.max())
linrel = linreg.intercept_ + linreg.coef_*line
fig, ax = plt.subplots(1,1, figsize=(10,10))
plt.scatter(data.duration, data.credit_amount)
plt.plot(line, linrel, label='Relacion_lineal', c='k')
plt.xlabel('Duration')
plt.ylabel('Credit_amount')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We see a relatively strong, positive relationship between the credit amount and its duration. Duration vs. age
###Code
fig, ax = plt.subplots(1,1, figsize=(10,10))
pd.plotting.scatter_matrix(data[['credit_amount', 'age']] ,ax=ax, grid=True)
linreg = LinearRegression().fit(data.age.values.reshape(-1, 1), data.credit_amount)
line = np.linspace(data.age.min(), data.age.max())
linrel = linreg.intercept_ + linreg.coef_*line
fig, ax = plt.subplots(1,1, figsize=(10,10))
plt.scatter(data.age, data.credit_amount)
plt.plot(line, linrel, label='Relacion_lineal', c='k')
plt.xlabel('Age')
plt.ylabel('Credit_amount')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
These two variables do not seem to have a particularly strong relationship. Dataset transformation
###Code
data_dummies = pd.get_dummies(data)
data_dummies.shape
data_dummies.head().T
###Output
_____no_output_____
###Markdown
Training, validation and test sets
###Code
X_trainvalid, X_test, y_trainvalid, y_test = train_test_split(data_dummies.values, target, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X_trainvalid, y_trainvalid, random_state=1)
print(target.shape)
print(X_trainvalid.shape)
print(X_train.shape)
print(X_test.shape)
###Output
(1000,)
(750, 63)
(562, 63)
(250, 63)
###Markdown
We will use X_trainvalid as the training set for models that do not require parameter validation. For those that do, we will use X_train for training and X_valid for validation. Logistic regression First we try a logistic regression, given the transparency of the model
###Code
from sklearn.linear_model import LogisticRegressionCV
logreg = LogisticRegressionCV(max_iter=10000).fit(X_trainvalid, y_trainvalid)
pred_v_test = np.array([logreg.predict(X_test), y_test])
###Output
_____no_output_____
###Markdown
Coefficients The coefficients are not immediately interpretable as effect sizes because the features were on different scales (e.g. age between 20-70 and dummy variables between 0-1) Predictions
###Code
plt.matshow(pred_v_test[:,:30], cmap='viridis')
plt.yticks([0,1],['Prediction', 'True value'])
plt.colorbar()
print('El puntaje de la regresion logistica en el es de',logreg.score(X_test, y_test)*100, '%')
from sklearn.metrics import confusion_matrix
conf=pd.DataFrame(confusion_matrix(y_test, logreg.predict(X_test)),
index=['Verdadero Malo', 'Verdadero Bueno'],
columns=['Prediccion Malo', 'Prediccion Bueno'])
conf
###Output
_____no_output_____
###Markdown
Loss function The dataset documentation includes an asymmetric loss function: false positives cost 5 points and false negatives 1 point.
###Code
def loss_funct(target, prediction):
conf = confusion_matrix(target, prediction)
return conf[0,1]*5 + conf[1,0]*1
print('Funcion de perdida de Reg Logistica:',loss_funct(y_test, logreg.predict(X_test)))
###Output
Funcion de perdida de Reg Logistica: 213
###Markdown
Modelling with preprocessing
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_trainvalid)
X_trainvalid_scaled = scaler.transform(X_trainvalid)
X_test_scaled = scaler.transform(X_test)
logreg = LogisticRegressionCV(max_iter=10000).fit(X_trainvalid_scaled, y_trainvalid)
pred_v_test = np.array([logreg.predict(X_test_scaled), y_test])
plt.matshow(pred_v_test[:,:30], cmap='viridis')
plt.yticks([0,1],['Prediction', 'True value'])
plt.colorbar()
print('El puntaje de la regresion logistica escalada es de',logreg.score(X_test_scaled, y_test)*100, '%')
print('Funcion de perdida de Reg Logistica Escalada:',loss_funct(y_test, logreg.predict(X_test_scaled)))
###Output
El puntaje de la regresion logistica escalada es de 74.0 %
Funcion de perdida de Reg Logistica Escalada: 221
###Markdown
With this preprocessing we lose a bit of score but gain interpretability: once the features are standardized, we can compare effect sizes through the magnitude of the coefficients.
###Code
fig, ax = plt.subplots(1,1,figsize=(10,10))
ax.matshow((logreg.coef_).reshape(7,-1), cmap='viridis')
print('Parametro con menor valor:',data_dummies.columns[np.argmin(logreg.coef_)])
print('Menor parámetro:', format(np.min(logreg.coef_),'.3f'))
###Output
Parametro con menor valor: duration
Menor parámetro: -0.344
###Markdown
This result contradicts the earlier expectation that longer durations go with better credit quality
###Code
print('Parametro con mayor valor:',data_dummies.columns[np.argmax(logreg.coef_)])
print('Mayor parámetro:',format(np.max(logreg.coef_),'.3f'))
data_dummies["""checking_status_no checking"""].value_counts()
###Output
_____no_output_____
###Markdown
This result seems counterintuitive: customers with no checking account have better credit quality. A possible explanation is that they come from other banks, and customers who switch banks to take out a loan probably have financial habits that lead to solvency. Modelling with hyperparameter optimization
###Code
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.model_selection import GridSearchCV
pipe = make_pipeline(StandardScaler(), LogisticRegressionCV(max_iter=5000))
param_grid = {'logisticregressioncv__Cs':[1, 5, 10, 50, 100]}
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5)
grid.fit(X_train, y_train)
grid.best_params_
pipe = make_pipeline(StandardScaler(), LogisticRegressionCV(Cs = 5, max_iter=5000))
pipe.fit(X_trainvalid, y_trainvalid)
print('El puntaje de la regresion logistica optimizada es de',pipe.score(X_test, y_test)*100, '%')
print('Funcion de perdida de Reg Logistica optimizada:',loss_funct(y_test, pipe.predict(X_test)))
###Output
El puntaje de la regresion logistica optimizada es de 73.2 %
Funcion de perdida de Reg Logistica optimizada: 215
###Markdown
Precision calibration Given the asymmetric loss function, we calibrate the model's precision, defined as $$precision=\frac{TG}{TG + FG}$$ where TG and FG stand for True Good and False Good. This is equivalent to calibrating the type I error
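A quick sanity check of this definition at the default threshold, using scikit-learn's `precision_score` on the scaled test-set predictions from the cells above (added here as an illustration, not part of the original notebook):
###Code
from sklearn.metrics import precision_score
# precision for the "good" class (label 1) at the default 0.5 threshold
print(precision_score(y_test, logreg.predict(X_test_scaled)))
###Output
_____no_output_____
###Markdown
The cells below sweep the decision threshold and trace the full precision-recall curve on the validation set: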
###Code
from sklearn.metrics import precision_recall_curve
pipe = make_pipeline(StandardScaler(), LogisticRegressionCV(Cs = 5, max_iter=5000))
pipe.fit(X_train, y_train)
precision, recall, thresholds = precision_recall_curve(y_valid, pipe.decision_function(X_valid))
close_zero = np.argmin(np.abs(thresholds))
plt.figure(figsize=(10,7))
plt.plot(precision, recall, label='curva_precision_recall')
plt.plot(precision[close_zero], recall[close_zero], 'o', markersize=10, fillstyle='none', label='umbral_cero')
plt.legend()
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.title('Curva de Precision vs. Recall')
loss_results = []
for threshold in thresholds:
loss_results.append(loss_funct(y_valid, pipe.decision_function(X_valid) > threshold))
plt.figure(figsize=(10,7))
plt.plot(thresholds, loss_results)
plt.plot(thresholds[close_zero], loss_results[close_zero], 'o', markersize=12, fillstyle='none', label='Selección original')
plt.xlabel('Thresholds')
plt.ylabel('Loss results')
plt.legend()
###Output
_____no_output_____
###Markdown
On the validation set, a threshold between 1 and 1.5 appears to minimize the loss function. We use 1.2 as a manual calibration
###Code
calib_thresh = 1.2
print('Funcion de perdida de Reg Logistica con umbral calibrado:',loss_funct(y_test, pipe.decision_function(X_test)>calib_thresh))
pred_v_test = np.array([logreg.predict(X_test_scaled), y_test])
pred_v_test_calib = np.array([pipe.decision_function(X_test)>calib_thresh, y_test])
fig, axes = plt.subplots(2, 1, figsize=(10,4))
ax1, ax2 = axes.ravel()
ax1.matshow(pred_v_test[:,:50], cmap='viridis')
ax1.set_yticks([0,1])
ax1.set_yticklabels(['Prediction', 'True value'])
ax1.set_title('Logreg sin calibracion de umbral')
ax2.matshow(pred_v_test_calib[:,:50], cmap='viridis')
ax2.set_yticks([0,1])
ax2.set_yticklabels(['Prediction', 'True value'])
ax2.set_title('Logreg de umbral calibrado')
###Output
_____no_output_____
###Markdown
We immediately see that the regression with the calibrated threshold makes many more errors! This is explained by the asymmetry between rating a bad borrower as good (false positive, losses) and rating a good borrower as bad (false negative, foregone profit). Coefficients
###Code
fig, ax = plt.subplots(1,1,figsize=(10,10))
ax.matshow((pipe.named_steps['logisticregressioncv'].coef_).reshape(7,-1), cmap='viridis')
print('Parametro con menor valor:',data_dummies.columns[np.argmin(pipe.named_steps['logisticregressioncv'].coef_)])
print('Menor parámetro:', format(np.min(pipe.named_steps['logisticregressioncv'].coef_),'.3f'))
print('Parametro con mayor valor:',data_dummies.columns[np.argmax(pipe.named_steps['logisticregressioncv'].coef_)])
print('Mayor parámetro:',format(np.max(pipe.named_steps['logisticregressioncv'].coef_),'.3f'))
###Output
Parametro con mayor valor: checking_status_no checking
Mayor parámetro: 0.241
###Markdown
Comparison with an alternative model: Random Forest
###Code
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=2)
rf.fit(X_train, y_train)
print('El puntaje del random forest es de', rf.score(X_test, y_test)*100, '%')
print('Funcion de perdida del random forest:', loss_funct(y_test, rf.predict(X_test)))
###Output
El puntaje del random forest es de 74.4 %
Funcion de perdida del random forest: 248
###Markdown
Without any tuning, this model's loss is similar to that of the worst models.
###Code
precision_rf, recall_rf, thresholds_rf = precision_recall_curve(y_valid, rf.predict_proba(X_valid)[:,1])
close_zero = np.argmin(np.abs(thresholds))
close_zero_rf = np.argmin(np.abs(thresholds_rf-0.5))
plt.figure(figsize=(10,7))
plt.plot(precision_rf, recall_rf, label='curva_random_forest')
plt.plot(precision_rf[close_zero_rf], recall_rf[close_zero_rf], 'o', markersize=10, fillstyle='none', label='umbral_cero')
plt.plot(precision, recall, label='curva_logreg')
plt.plot(precision[close_zero], recall[close_zero], 'o', markersize=10, fillstyle='none', label='umbral_cero')
plt.legend()
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.title('Curva de precision vs. recall')
###Output
_____no_output_____
###Markdown
We find that the logistic model appears to outperform the random forest model substantially, almost strictly dominating it. We evaluate the respective loss functions.
###Code
loss_results_rf = []
for threshold in thresholds_rf:
loss_results_rf.append(loss_funct(y_valid, rf.predict_proba(X_valid)[:,1] > threshold))
plt.figure(figsize=(10,7))
plt.plot(thresholds, loss_results, label='Logreg_Loss_function')
plt.plot(thresholds[close_zero], loss_results[close_zero], 'o', markersize=12, fillstyle='none', label='Selección original Logreg')
plt.plot(thresholds_rf, loss_results_rf, label='RF_Loss_function')
plt.plot(thresholds_rf[close_zero_rf], loss_results_rf[close_zero_rf], 'o', markersize=12, fillstyle='none', label='Selección original RF')
plt.xlabel('Thresholds')
plt.ylabel('Loss results')
plt.legend()
###Output
_____no_output_____
###Markdown
There seems to be a point where the Random Forest model beats the logistic one. My expectation is that the difference is specific to the validation sample.
###Code
calib_thresh = thresholds[np.argmin(loss_results)]
print('Funcion de perdida de Reg Logistica con umbral calibrado:',loss_funct(y_test, pipe.decision_function(X_test)>calib_thresh))
calib_thresh_rf = thresholds_rf[np.argmin(loss_results_rf)]
print('Funcion de perdida de Random Forest con umbral calibrado:',loss_funct(y_test, rf.predict_proba(X_test)[:,1]>calib_thresh_rf))
###Output
Funcion de perdida de Reg Logistica con umbral calibrado: 152
Funcion de perdida de Random Forest con umbral calibrado: 150
###Markdown
We find that the difference on the test set is 0.6%, which can perfectly well be attributed to chance. However, contrary to what the precision-recall curves suggested, both models achieve similar performance. Expectations 1) 'No checking' and a balance below zero should be negative for credit quality.
###Code
check_subzero= data_dummies.columns.tolist().index("""checking_status_<0""")
print('Coeficiente Checking status <0: ',
format(pipe.named_steps['logisticregressioncv'].coef_[:,check_subzero][0], '.3f'))
check_no= data_dummies.columns.tolist().index("""checking_status_no checking""")
print('Coeficiente no Checking:',
format(pipe.named_steps['logisticregressioncv'].coef_[:, check_no][0],'.3f'))
###Output
Coeficiente Checking status <0: -0.231
Coeficiente no Checking: 0.241
###Markdown
**MIXED:** only one of these two expectations is confirmed. 2) Longer durations will have fewer quality problems
###Code
dur_no= data_dummies.columns.tolist().index('duration')
print('Coeficiente Duration:',
format(pipe.named_steps['logisticregressioncv'].coef_[:, dur_no][0],'.3f'))
###Output
Coeficiente Duration: -0.196
###Markdown
**FALSE** 3) Business and investment purposes (e.g. education) will carry less risk than consumer credit. **FALSE**
###Code
for feature in data_dummies.columns.tolist():
print(feature, 'value:')
print(format(pipe.named_steps['logisticregressioncv'].coef_[:, data_dummies.columns.tolist().index(feature)][0],'.3f'))
print('\n')
###Output
duration value:
-0.196
credit_amount value:
-0.099
installment_commitment value:
-0.139
residence_since value:
-0.007
age value:
0.047
existing_credits value:
-0.035
num_dependents value:
-0.014
checking_status_<0 value:
-0.231
checking_status_0<=X<200 value:
-0.048
checking_status_>=200 value:
0.028
checking_status_no checking value:
0.241
credit_history_no credits/all paid value:
-0.128
credit_history_all paid value:
-0.130
credit_history_existing paid value:
-0.032
credit_history_delayed previously value:
0.010
credit_history_critical/other existing credit value:
0.143
purpose_new car value:
-0.110
purpose_used car value:
0.182
purpose_furniture/equipment value:
0.041
purpose_radio/tv value:
0.020
purpose_domestic appliance value:
-0.044
purpose_repairs value:
-0.071
purpose_education value:
-0.093
purpose_vacation value:
0.000
purpose_retraining value:
0.089
purpose_business value:
-0.030
purpose_other value:
0.024
savings_status_<100 value:
-0.071
savings_status_100<=X<500 value:
-0.041
savings_status_500<=X<1000 value:
0.017
savings_status_>=1000 value:
0.042
savings_status_no known savings value:
0.087
employment_unemployed value:
0.009
employment_<1 value:
-0.082
employment_1<=X<4 value:
-0.022
employment_4<=X<7 value:
0.095
employment_>=7 value:
0.009
personal_status_male div/sep value:
-0.002
personal_status_female div/dep/mar value:
-0.034
personal_status_male single value:
0.017
personal_status_male mar/wid value:
0.024
personal_status_female single value:
0.000
other_parties_none value:
-0.021
other_parties_co applicant value:
-0.070
other_parties_guarantor value:
0.088
property_magnitude_real estate value:
0.128
property_magnitude_life insurance value:
-0.044
property_magnitude_car value:
-0.028
property_magnitude_no known property value:
-0.073
other_payment_plans_bank value:
-0.065
other_payment_plans_stores value:
-0.030
other_payment_plans_none value:
0.074
housing_rent value:
-0.093
housing_own value:
0.054
housing_for free value:
0.032
job_unemp/unskilled non res value:
0.048
job_unskilled resident value:
-0.022
job_skilled value:
0.056
job_high qualif/self emp/mgmt value:
-0.073
own_telephone_none value:
-0.034
own_telephone_yes value:
0.034
foreign_worker_yes value:
-0.053
foreign_worker_no value:
0.053
|
Practise Assignment 3.ipynb | ###Markdown
Q1: Think of at least three kinds of your favorite pizza. Store these pizza names in a list, and then use a for loop to print the name of each pizza.
###Code
pizza_list = ["Chicken Pizza", "Beaf Pizza", "Mutton Pizza"]
## for loop
for x in pizza_list:
print(x + " is my favorite")
###Output
Chicken Pizza is my favorite
Beaf Pizza is my favorite
Mutton Pizza is my favorite
###Markdown
Q2: Start with your last question. Modify your for loop to print a sentence using the name of the pizza instead of printing just the name of the pizza. For each pizza you should have one line of output containing a simple statement like I like pepperoni pizza.
###Code
for y in pizza_list:
print(f"I like {y}")
###Output
I like Chicken Pizza
I like Beaf Pizza
I like Mutton Pizza
###Markdown
Q3: Use a for loop to print the numbers from 1 to 20, inclusive.
###Code
for num in range(20):
num = num + 1
print(num, end='\n')
###Output
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
###Markdown
Q4: Use the third argument of the range() function to make a list of the odd numbers from 1 to 20. Use a for loop to print each number.
###Code
for i in range(1,20,2):
print(i, end='\n')
###Output
1
3
5
7
9
11
13
15
17
19
###Markdown
Q5: Make a list of the multiples of 3 from 3 to 30. Use a for loop to print the numbers in your list.
###Code
multiple_of_three = []
for i in range(1,30,1):
i = i + 1
if i%3 == 0:
multiple_of_three.append(i)
for j in multiple_of_three:
print(j , end='\n')
###Output
3
6
9
12
15
18
21
24
27
30
###Markdown
Q6: A number raised to the third power is called a cube. For example, the cube of 2 is written as 2**3 in Python. Make a list of the first 10 cubes (that is, the cube of each integer from 1 through 10), and use a for loop to print out the value of each cube.
###Code
for i in range(10):
i+=1
print(f"The cube of {i} is {i**3}", end='\n')
###Output
The cube of 1 is 1
The cube of 2 is 8
The cube of 3 is 27
The cube of 4 is 64
The cube of 5 is 125
The cube of 6 is 216
The cube of 7 is 343
The cube of 8 is 512
The cube of 9 is 729
The cube of 10 is 1000
###Markdown
Q7: Make a Python program that contains your nine favourite dishes in a list called foods. Print the message, The first three items in the list are:. Then use a slice to print the first three items from that program's list. Print the message, Three items from the middle of the list are: Use a slice to print three items from the middle of the list. Print the message, The last three items in the list are: Use a slice to print the last three items in the list.
###Code
foods = ["Biryani", "Pulaow", "Zarda", "Qorma", "Karahi", "Nihari", "Rasmalai", "Faloodah", "Icecream"]
print(f'The First Three Items Are from Rice {foods[slice(0,3)]}')
print(f"The middle three items are spicy {foods[slice(3,6)]}")
print(f"The last three items are sweets {foods[slice(6,9)]}")
###Output
The First Three Items Are from Rice ['Biryani', 'Pulaow', 'Zarda']
The middle three items are spicy ['Qorma', 'Karahi', 'Nihari']
The last three items are sweets ['Rasmalai', 'Faloodah', 'Icecream']
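###Markdown
The answer above uses the ``slice()`` constructor; an equivalent and more common spelling uses slice syntax directly. A quick sketch reusing the same ``foods`` list:
###Code
print(f'The First Three Items Are from Rice {foods[:3]}')
print(f"The middle three items are spicy {foods[3:6]}")
print(f"The last three items are sweets {foods[-3:]}")
###Output
_____no_output_____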
###Markdown
Q8: Start with your program from your last question (Q7). Make a copy of the list of foods, and call it friend_foods. Then, do the following: Add a new dish to the original list. Add a different dish to the list friend_foods. Prove that you have two separate lists. Print the message, My favorite foods are: and then use a for loop to print the first list. Print the message, My friend's favorite foods are:, and then use a for loop to print the second list. NOTE: Make sure each new dish is stored in the appropriate list.
###Code
friend_foods = foods.copy()
foods.append("White Karahi")
friend_foods.append("Qulfi")
if set(friend_foods) == set(foods):
print("Both sets are same")
else:
print("Both sets are different")
print(f"My favorite dishes are {foods}")
print(f"My friend favorite dishes are {friend_foods}")
###Output
_____no_output_____
###Markdown
Q9: Take a user input from the console line. Store it in a variable called Alien_color. If the alien's color is red, print a statement that the player just earned 5 points for shooting the alien. If the alien's color isn't green, print a statement that the player just earned 10 points. If the alien's color isn't red or green, print a statement: Alien is no more.....
###Code
user_input = input("Enter red or green : ")
if user_input == 'red':
print('The player just earned 5 points for shooting the alien.')
elif user_input == 'green':
print("The player just earned 10 points.")
else:
print("Alien is no more.....")
user_input = input("Enter red or green : ")
if user_input == 'red':
print('The player just earned 5 points for shooting the alien.')
elif user_input == 'green':
print("The player just earned 10 points.")
else:
print("Alien is no more.....")
user_input = input("Enter red or green : ")
if user_input == 'red':
print('The player just earned 5 points for shooting the alien.')
elif user_input == 'green':
print("The player just earned 10 points.")
else:
print("Alien is no more.....")
###Output
Enter red or green : yellow
Alien is no more.....
###Markdown
Q10: Write an if-elif-else chain that determines a person's stage of life. Set a value for the variable age, and then: • If the person is less than 2 years old, print a message that the person is a baby. • If the person is at least 2 years old but less than 4, print a message that the person is a toddler. • If the person is at least 4 years old but less than 13, print a message that the person is a kid. • If the person is at least 13 years old but less than 20, print a message that the person is a teenager. • If the person is at least 20 years old but less than 65, print a message that the person is an adult. • If the person is age 65 or older, print a message that the person is an elder.
###Code
age = 25
if age >= 65:
print('the person is an elder.')
elif age >= 20:
print('the person is an adult.')
elif age >= 13:
print('the person is a teenager.')
elif age >= 4:
print('the person is a kid.')
elif age >= 2:
print('the person is a toddler.')
else:
print('the person is a baby.')
age = 77
if age >= 65:
print('the person is an elder.')
elif age >= 20:
print('the person is an adult.')
elif age >= 13:
print('the person is a teenager.')
elif age >= 4:
print('the person is a kid.')
elif age >= 2:
print('the person is a toddler.')
else:
print('the person is a baby.')
age = 17
if age >= 65:
print('the person is an elder.')
elif age >= 20:
print('the person is an adult.')
elif age >= 13:
print('the person is a teenager.')
elif age >= 4:
print('the person is a kid.')
elif age >= 2:
print('the person is a toddler.')
else:
print('the person is a baby.')
age = 9
if age >= 65:
print('the person is an elder.')
elif age >= 20:
print('the person is an adult.')
elif age >= 13:
print('the person is a teenager.')
elif age >= 4:
print('the person is a kid.')
elif age >= 2:
print('the person is a toddler.')
else:
print('the person is a baby.')
age = 3
if age >= 65:
print('the person is an elder.')
elif age >= 20:
print('the person is an adult.')
elif age >= 13:
print('the person is a teenager.')
elif age >= 4:
print('the person is a kid.')
elif age >= 2:
print('the person is a toddler.')
else:
print('the person is a baby.')
age = 1.5
if age >= 65:
print('the person is an elder.')
elif age >= 20:
print('the person is an adult.')
elif age >= 13:
print('the person is a teenager.')
elif age >= 4:
print('the person is a kid.')
elif age >= 2:
print('the person is a toddler.')
else:
print('the person is a baby.')
###Output
the person is a baby.
###Markdown
Q11: Do the following to create a program that simulates how websites ensure that everyone has a unique username.• Make a list of five or more usernames called current_users.• Make another list of five usernames called new_users. Make sure one or two of the new usernames are also in the current_users list.• Loop through the new_users list to see if each new username has already been used. If it has, print a message that the person will need to enter a new username. If a username has not been used, print a message saying that the username is available.• Make sure your comparison is case insensitive. If 'John' has been used, 'JOHN' should not be accepted.
###Code
current_users = ['umaima', 'dua', 'sada', 'himna', 'areeba']
new_users = ['shakeel', 'shahid', 'shabbir', 'sada', 'umaima']
## umaima and sada are common in both lists
for new_user in new_users:
if new_user.lower() in [user.lower() for user in current_users]:  # case-insensitive check, as the question requires
print(f'the person {new_user} need to enter another username')
else:
print(f'The {new_user} is availabe')
###Output
The shakeel is availabe
The shahid is availabe
The shabbir is availabe
the person sada need to enter another username
the person umaima need to enter another username
###Markdown
Q12: Use a dictionary to store information about a person you know. Store their first name, last name, age, and the city in which they live. You should have keys such as first_name, last_name, age, and city. Print each piece of information stored in your dictionary.
###Code
info = {'first_name' : 'Shakeel', 'last_name' : 'Haider' , 'age' : '25' , 'city' : 'karachi' }
print(info['first_name'])
print(info['last_name'])
print(info['age'])
print(info['city'])
###Output
Shakeel
Haider
25
karachi
###Markdown
Q13: Start with your last question (Q12), and loop through the dictionary's keys and values. When you're sure that your loop works, add five more Python terms to your dictionary. When you run your program again, these new words and meanings should automatically be included in the output.
###Code
print('Iterating over keys')
for keys in info:
print(keys)
print('Iterating over values')
for values in info.values():
print(values)
print('iterating over key value pair')
for key, value in info.items():
print(f'{key} : {value}')
info['qualification'] = 'BS Software Engineering'
info['experience'] = 'Fresh'
info['focus_area'] = 'Machine Learning'
info['career'] = 'Zero'
print('iterating over key value pair again')
for key, value in info.items():
print(f'{key} : {value}')
###Output
iterating over key value pair again
first_name : Shakeel
last_name : Haider
age : 25
city : karachi
qualification : BS Software Engineering
experience : Fresh
focus_area : Machine Learning
career : Zero
###Markdown
Q14: Make a dictionary containing three major rivers and the country each river runs through. One key-value pair might be 'nile': 'egypt'. • Use a loop to print a sentence about each river, such as The Nile runs through Egypt. NOTE: use upper case for keys and values.
###Code
major_rivers = {'nile' : 'egypt', 'yangtze' : 'china', 'mississippi' : 'usa'}
for key, value in major_rivers.items():
print(f"The {key.upper()} runs through {value.upper()}")
###Output
The NILE runs through EGYPT
The YANGTZE runs through CHINA
The MISSISSIPPI runs through USA
###Markdown
Q15: Make several dictionaries, where the name of each dictionary is the name of a pet. In each dictionary, include the kind of animal and the owner's name. Store these dictionaries in a list called pets. Next, loop through your list and, as you do, print everything you know about each pet.
###Code
dog = {'name' : 'dog', 'kind' : 'friendly', 'owner' : 'zakir bhai'}
cat = {'name' : 'cat','kind' : 'friendly', 'owner' : 'shakeel'}
panther = {'name' : 'panther','kind' : 'non-friendly', 'owner' : 'shumail'}
lion = {'name' : 'lion','kind' : 'non-friendly', 'owner' : 'shujaat'}
parrot = {'name' : 'parrot','kind' : 'bird' , 'owner' : 'suleman'}
pets = [dog, cat, panther, lion, parrot]
for dic in pets:
for key, value in dic.items():
print(f" {key.title()} : {value.title()}")
print('',end='\n')
###Output
Name : Dog
Kind : Friendly
Owner : Zakir Bhai
Name : Cat
Kind : Friendly
Owner : Shakeel
Name : Panther
Kind : Non-Friendly
Owner : Shumail
Name : Lion
Kind : Non-Friendly
Owner : Shujaat
Name : Parrot
Kind : Bird
Owner : Suleman
|
study1/4_deep_learning/CNN.ipynb | ###Markdown
3. Convolutional Neural Network 1) import modules
###Code
import tensorflow as tf
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
###Output
_____no_output_____
###Markdown
2) define placeholder for INPUT & LABELS
###Code
INPUT = tf.placeholder(tf.float32, [None, 28*28])
LABELS = tf.placeholder(tf.int32, [None])
###Output
_____no_output_____
###Markdown
3) define cnn model
###Code
#prediction = convolutional_neural_network(input=IMAGES, output_dim=10)
###Output
_____no_output_____
###Markdown
- define convolutional_neural_network function with tf.nn.conv2d, tf.nn.max_pool, tf.nn.tanh
###Code
def convolutional_neural_network(input, output_dim=None):
image = tf.reshape(input, [-1, 28, 28, 1]) #batch_size x width x height x channel
# Conv layer1
# Filter가 Weight 역활을 함
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 20], stddev=0.1))
b_conv1 = tf.Variable(tf.zeros([20]))
h_conv1 = tf.nn.conv2d(
image,
W_conv1,
strides=[1, 1, 1, 1],
padding='SAME') + b_conv1 # batch_sizex28x28x20
fmap_conv1 = tf.nn.tanh(h_conv1)
# Pooling(Max) layer1
# ksize = [one_image, width, height, one_channel]
h_pool1 = tf.nn.max_pool(
fmap_conv1,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME'
) # batch_sizex14x14x20
# Conv layer2
# The filter plays the role of the weights
W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 20, 50], stddev=0.1))
b_conv2 = tf.Variable(tf.zeros([50]))
h_conv2 = tf.nn.conv2d(
h_pool1,
W_conv2,
strides=[1, 1, 1, 1],
padding='SAME') + b_conv2 # batch_sizex14x14x50
fmap_conv2 = tf.nn.tanh(h_conv2)
# Pooling(Max) layer2
h_pool2 = tf.nn.max_pool(
fmap_conv2,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME'
) # batch_sizex7x7x50
h_pool2_flat = tf.reshape(h_pool2, [-1, 50 * 7 * 7]) # batch_sizex(7x7x50)
# fully-connected layer1
W_fc1 = tf.Variable(tf.truncated_normal([50 * 7 * 7, 500], stddev=0.1))
b_fc1 = tf.Variable(tf.zeros([500]))
h_fc1 = tf.nn.tanh(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) # batch_sizex500
# fully-connected layer2
W_fc2 = tf.Variable(tf.truncated_normal([500, output_dim], stddev=0.1))
b_fc2 = tf.Variable(tf.zeros([output_dim]))
output = tf.matmul(h_fc1, W_fc2) + b_fc2 #batch_sizex10
return output
prediction = convolutional_neural_network(INPUT, output_dim=10)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=LABELS, logits=prediction
)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
###Output
_____no_output_____
###Markdown
4) load data
###Code
mnist = input_data.read_data_sets("./data/", one_hot=True)
###Output
WARNING:tensorflow:From <ipython-input-6-f659c5e1ce47>:1: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From /home/ray/multicamp/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From /home/ray/multicamp/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting ./data/train-images-idx3-ubyte.gz
WARNING:tensorflow:From /home/ray/multicamp/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting ./data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From /home/ray/multicamp/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting ./data/t10k-images-idx3-ubyte.gz
Extracting ./data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From /home/ray/multicamp/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
###Markdown
5) start training - set training parameters : batch size, learning rate, total loop
###Code
BATCH_SIZE = 100
LEARNING_RATE = 0.01
TOTAL_LOOP = 10000
###Output
_____no_output_____
###Markdown
- arrA = [[0,0,0,0,1],[0,1,0,0,0]] - np.where(arrA) => ([0,1], [4,1]) - ref) https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.where.html?highlight=numpy%20wherenumpy.where
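A tiny runnable check of that behaviour (added illustration; the array below is just a two-row one-hot example):
###Code
arrA = np.array([[0, 0, 0, 0, 1],
                 [0, 1, 0, 0, 0]])
rows, cols = np.where(arrA)      # indices of the non-zero entries
print(rows, cols)                # [0 1] [4 1] -> the column index per row is the class label
###Output
_____no_output_____
###Markdown
The training loop below uses ``np.where(train_labels)[1]`` in exactly this way to turn one-hot labels into class indices.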
###Code
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for loop in range(1, TOTAL_LOOP + 1):
train_images, train_labels = mnist.train \
.next_batch(BATCH_SIZE)
train_labels = np.where(train_labels)[1]
_, loss = sess.run(
[optimizer, cost],
feed_dict={
INPUT: train_images,
LABELS: train_labels
}
)
if loop % 500 == 0 or loop == 0:
print("loop: %05d,"%(loop), "loss:", loss)
print("Training Finished! (loss : " + str(loss) + ")")
###Output
loop: 00500, loss: 0.24762183
loop: 01000, loss: 0.19355531
loop: 01500, loss: 0.15968311
loop: 02000, loss: 0.13976234
loop: 02500, loss: 0.06908678
loop: 03000, loss: 0.09957447
loop: 03500, loss: 0.10375062
loop: 04000, loss: 0.05489205
loop: 04500, loss: 0.120919704
loop: 05000, loss: 0.078065485
loop: 05500, loss: 0.018431403
loop: 06000, loss: 0.03395952
loop: 06500, loss: 0.082293175
loop: 07000, loss: 0.05262528
loop: 07500, loss: 0.10878975
loop: 08000, loss: 0.071632974
loop: 08500, loss: 0.06618172
loop: 09000, loss: 0.07681943
loop: 09500, loss: 0.012724737
loop: 10000, loss: 0.023088682
Training Finished! (loss : 0.023088682)
###Markdown
6) test performance - test image shape: (100, 784) - test label shape: (100, 10) - arrB = [[0, 1, 2],[3, 4, 5]] - np.argmax(arrB) => 5 - np.argmax(arrB, axis=0) => [1, 1, 1] - np.argmax(arrB, axis=1) => [2, 2] - ref) https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.argmax.html
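A quick runnable version of the ``np.argmax`` example above (added illustration):
###Code
arrB = np.array([[0, 1, 2],
                 [3, 4, 5]])
print(np.argmax(arrB))           # 5 (index into the flattened array)
print(np.argmax(arrB, axis=0))   # [1 1 1]
print(np.argmax(arrB, axis=1))   # [2 2]
###Output
_____no_output_____
###Markdown
The evaluation loop below applies ``np.argmax(pred_result, axis=1)`` in the same way to turn the network outputs into predicted digits.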
###Code
TEST_SAMPLE_SIZE = 100
TEST_NUMBER = 5
accuracy_save = dict()
for number in range(1, 1+TEST_NUMBER):
test_images, test_labels = mnist.test \
.next_batch(TEST_SAMPLE_SIZE)
pred_result = sess.run(
prediction,
feed_dict={INPUT: test_images}
)
pred_number = np.argmax(pred_result, axis=1) # 100x1
label_number = np.where(test_labels)[1] #100x1
accuracy_save[number] = np.sum(pred_number == label_number)
print("Accuracy:", accuracy_save)
print("Total mean Accuracy:",
np.mean(list(accuracy_save.values()))
)
###Output
Accuracy: {1: 95, 2: 100, 3: 99, 4: 99, 5: 99}
Total mean Accuracy: 98.4
|
notebook/5_Deployment/5_2_Intro_to_TorchScript_tutorial_jp.ipynb | ###Markdown
"Introduction to TorchScript"===============================================================[Original title] INTRODUCTION TO TORCHSCRIPT [Original authors] James Reed ([email protected]), Michael Suo ([email protected]) [Original URL] https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html [Translation] Yutaro Ogawa, AI Transformation Center, Information Services International-Dentsu (ISID) [Date] October 24, 2020 [Tutorial overview] This tutorial briefly explains how PyTorch models are structured and then gives an overview of, and usage instructions for, a technique called TorchScript.--- This tutorial is an introduction to TorchScript, an intermediate representation of a PyTorch model (built from ``nn.Module``) that can then be run in a high-performance environment such as C++. This tutorial covers the following topics.
1. The basics of models authored in PyTorch, in particular:
- Modules
- The ``forward`` function
- Composing modules into a hierarchy
2. Specific methods for converting a PyTorch module into TorchScript, which can run in a high-performance deployment environment, in particular:
- Tracing an existing module
- Scripting, which compiles a module directly from its code
- How to compose the two approaches and use them together
- Saving and loading TorchScript modules
Once you finish this tutorial, we encourage you to also try the follow-up tutorial, "Loading a TorchScript model in C++" (LOADING A TORCHSCRIPT MODEL IN C++).
That follow-up tutorial explains how to run TorchScript from C++.
###Code
%matplotlib inline
import torch # This is all you need to use both PyTorch and TorchScript!
print(torch.__version__)
###Output
1.7.0+cu101
###Markdown
Basics of PyTorch model authoring---------------------------------Let's start by defining a simple module. A ``Module`` is the basic unit of composition in PyTorch. A ``Module`` contains the following: 1. A constructor, which prepares the module for invocation. 2. A set of ``Parameters`` and sub-``Modules``; these become usable once the module has been invoked and initialized by the constructor. 3. A ``forward`` function implementing the forward pass, which is the code run when the module itself is called. Let's check a small example first.
###Code
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
def forward(self, x, h):
new_h = torch.tanh(x + h)
return new_h, new_h
my_cell = MyCell()
x = torch.rand(3, 4)
h = torch.rand(3, 4)
print(my_cell(x, h))
###Output
(tensor([[0.4920, 0.8299, 0.5254, 0.8509],
[0.8504, 0.8406, 0.9022, 0.6847],
[0.6422, 0.8253, 0.7027, 0.5935]]), tensor([[0.4920, 0.8299, 0.5254, 0.8509],
[0.8504, 0.8406, 0.9022, 0.6847],
[0.6422, 0.8253, 0.7027, 0.5935]]))
###Markdown
In the code above we: 1. Created a class that subclasses ``torch.nn.Module``. 2. Defined a constructor. It does not do much; it simply runs the parent-class constructor via ``super``. 3. Defined the forward-pass function ``forward``, which takes two inputs and returns two outputs. The contents of the forward function are not really important for this tutorial, but it loosely implements part of an [RNN](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)-like cell. We then instantiated the module, prepared the variables ``x`` and ``h`` as 3x4 matrices of random values, and invoked the module with ``my_cell(x, h)``. Calling the module like this executes its ``forward`` function. Next, let's try something a little more interesting.
###Code
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.linear(x) + h)
return new_h, new_h
my_cell = MyCell()
print(my_cell)
print(my_cell(x, h))
###Output
MyCell(
(linear): Linear(in_features=4, out_features=4, bias=True)
)
(tensor([[ 0.7103, 0.1837, 0.2306, 0.5142],
[ 0.8638, 0.4461, 0.6245, 0.6464],
[ 0.6585, -0.0320, 0.7657, 0.0201]], grad_fn=<TanhBackward>), tensor([[ 0.7103, 0.1837, 0.2306, 0.5142],
[ 0.8638, 0.4461, 0.6245, 0.6464],
[ 0.6585, -0.0320, 0.7657, 0.0201]], grad_fn=<TanhBackward>))
###Markdown
We have redefined the module ``MyCell``, this time adding a ``self.linear`` member and calling ``self.linear`` inside the ``forward`` function. What is actually happening here? ``torch.nn.Linear`` is a ``Module`` provided by the PyTorch standard library, just like this ``MyCell``.
Being a module, ``torch.nn.Linear`` can be invoked directly inside ``forward``, just like calling a function.
As a result, we have composed a hierarchy of modules.
Printing a module with a print statement gives a textual representation of its submodule hierarchy.
(the print(my_cell) part) In this example we can see the ``Linear`` submodule and its parameters.
By composing multiple ``Module``s in this way, we can author models made of reusable components concisely and readably.
You may have noticed the ``grad_fn`` entries in the output of the cell above.
``grad_fn`` comes from PyTorch's automatic differentiation, [`autograd`](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html); see that tutorial for the details.
In short, automatic differentiation lets us compute partial derivatives through even complex programs.
Because autograd is so well designed and built into PyTorch, it greatly increases the flexibility with which models can be implemented.
Next, let's check out that flexibility.
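A tiny, self-contained illustration of autograd (added here; not part of the original tutorial): gradients are recorded while the forward computation runs and replayed backwards when ``backward()`` is called.
###Code
a = torch.ones(2, 2, requires_grad=True)   # track operations on this tensor
b = (a * a + 3 * a).sum()                  # some arbitrary computation
b.backward()                               # replay the recorded operations backwards
print(a.grad)                              # db/da = 2*a + 3, so every entry is 5
###Output
_____no_output_____
###Markdown
The next cell uses exactly this dynamic behaviour, steering the forward pass with an if-statement: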
###Code
class MyDecisionGate(torch.nn.Module):
def forward(self, x):
if x.sum() > 0:
return x
else:
return -x
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.dg = MyDecisionGate()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.dg(self.linear(x)) + h)
return new_h, new_h
my_cell = MyCell()
print(my_cell)
print(my_cell(x, h))
###Output
MyCell(
(dg): MyDecisionGate()
(linear): Linear(in_features=4, out_features=4, bias=True)
)
(tensor([[ 0.2030, 0.1072, -0.0427, 0.7238],
[ 0.2365, 0.5272, 0.3636, 0.8485],
[-0.0170, 0.3070, 0.7457, 0.4996]], grad_fn=<TanhBackward>), tensor([[ 0.2030, 0.1072, -0.0427, 0.7238],
[ 0.2365, 0.5272, 0.3636, 0.8485],
[-0.0170, 0.3070, 0.7457, 0.4996]], grad_fn=<TanhBackward>))
###Markdown
We have once again redefined the ``MyCell`` class, this time together with a ``MyDecisionGate`` class. This module uses **control flow** to steer the forward pass; control flow typically involves loops and if-statements. Many deep learning frameworks require the complete model representation up front (translator's note: which is why such control flow cannot be used there). In PyTorch, however, gradient computation is handled as a running record: PyTorch records what actually happens during the computation and, when computing partial derivatives of the output, replays that record backwards. PyTorch therefore does not need the backward pass of the model to be fully defined in advance. (Translator's note: this characteristic of PyTorch is called define-by-run; the approach of fixing the computation flow ahead of time is called define-and-run.) The animated figure below also illustrates how automatic differentiation works. Figure: [How autograd operates](https://github.com/pytorch/pytorch/raw/master/docs/source/_static/img/dynamic_graph.gif) Basics of TorchScript---------------------Now let's run the example and learn how to use TorchScript. In a word, TorchScript is a tool for capturing a model while accommodating PyTorch's flexible and dynamic nature. Let's start by capturing a module with tracing.
###Code
class MyCell(torch.nn.Module):
def __init__(self):
super(MyCell, self).__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.linear(x) + h)
return new_h, new_h
my_cell = MyCell()
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell = torch.jit.trace(my_cell, (x, h))
print(traced_cell)
traced_cell(x, h)
###Output
MyCell(
original_name=MyCell
(linear): Linear(original_name=Linear)
)
###Markdown
Here we've reused the second version of the ``MyCell`` class from earlier in this tutorial. Rather than running its forward pass ourselves, we invoked ``torch.jit.trace``, passing in the module along with example inputs. (Translator's note: this is the ``traced_cell = torch.jit.trace(my_cell, (x, h))`` line — instead of calling ``my_cell`` directly, we hand the model ``my_cell`` and the sample inputs ``x`` and ``h`` to ``torch.jit.trace`` and receive ``traced_cell`` as its output.)
What exactly has this done?
``torch.jit.trace`` took the ``Module``, ran its forward pass, and created ``traced_cell``, a ``torch.jit.ScriptModule`` (an instance of ``TracedModule``).
TorchScript records its definitions in an Intermediate Representation (IR),
which in the deep learning community is commonly referred to as a graph.
We can examine the graph via the ``.graph`` property.
###Code
print(traced_cell.graph)
###Output
graph(%self.1 : __torch__.MyCell,
%input : Float(3:4, 4:1, requires_grad=0, device=cpu),
%h : Float(3:4, 4:1, requires_grad=0, device=cpu)):
%19 : __torch__.torch.nn.modules.linear.Linear = prim::GetAttr[name="linear"](%self.1)
%21 : Tensor = prim::CallMethod[name="forward"](%19, %input)
%12 : int = prim::Constant[value=1]() # <ipython-input-5-1f6e08af67d0>:7:0
%13 : Float(3:4, 4:1, requires_grad=1, device=cpu) = aten::add(%21, %h, %12) # <ipython-input-5-1f6e08af67d0>:7:0
%14 : Float(3:4, 4:1, requires_grad=1, device=cpu) = aten::tanh(%13) # <ipython-input-5-1f6e08af67d0>:7:0
%15 : (Float(3:4, 4:1, requires_grad=1, device=cpu), Float(3:4, 4:1, requires_grad=1, device=cpu)) = prim::TupleConstruct(%14, %14)
return (%15)
###Markdown
However, this is a very low-level representation — easy for machines to process but hard for humans to read — so most of the information in the graph is not useful to end users. Instead, we can use the ``.code`` property to get a Python-syntax interpretation of the TorchScript code (the graph).
###Code
print(traced_cell.code)
###Output
def forward(self,
input: Tensor,
h: Tensor) -> Tuple[Tensor, Tensor]:
_0 = torch.add((self.linear).forward(input, ), h, alpha=1)
_1 = torch.tanh(_0)
return (_1, _1)
###Markdown
We've now run the example — but why did we go through all of this? There are several reasons (advantages):

1. TorchScript code is invoked by its own interpreter rather than the Python interpreter. Unlike the Python interpreter, this interpreter does not acquire the Global Interpreter Lock, so the same instance can process many requests at the same time.
2. The TorchScript format saves the entire model, so it can be loaded in other environments — for example, on a server written in a language other than Python.
3. TorchScript gives us a representation on which compiler optimizations can be applied, making execution more efficient.
4. TorchScript can be executed on a variety of backend and device environments, which gives it broader reach than converting the model into separate programs built from individual operators.

Calling ``traced_cell`` also gives us the same results as calling the original Python module instance ``my_cell``.
###Code
print(my_cell(x, h))
print(traced_cell(x, h))
###Output
(tensor([[ 0.6958, 0.1292, 0.0430, 0.1840],
[ 0.7851, 0.2323, -0.2598, 0.2947],
[ 0.5422, -0.1874, -0.4077, -0.0105]], grad_fn=<TanhBackward>), tensor([[ 0.6958, 0.1292, 0.0430, 0.1840],
[ 0.7851, 0.2323, -0.2598, 0.2947],
[ 0.5422, -0.1874, -0.4077, -0.0105]], grad_fn=<TanhBackward>))
(tensor([[ 0.6958, 0.1292, 0.0430, 0.1840],
[ 0.7851, 0.2323, -0.2598, 0.2947],
[ 0.5422, -0.1874, -0.4077, -0.0105]],
grad_fn=<DifferentiableGraphBackward>), tensor([[ 0.6958, 0.1292, 0.0430, 0.1840],
[ 0.7851, 0.2323, -0.2598, 0.2947],
[ 0.5422, -0.1874, -0.4077, -0.0105]],
grad_fn=<DifferentiableGraphBackward>))
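###Markdown
As a hedged aside (our own addition, not part of the original tutorial), point 1 above — the TorchScript interpreter not being bound by the Global Interpreter Lock — is what makes it reasonable to serve a single traced module instance from several threads at once. A minimal sketch with a toy module and a thread pool of our own choosing:
###Code
# Hypothetical sketch (our own addition): calling one traced ScriptModule
# concurrently from a small thread pool of simulated requests.
import torch
from concurrent.futures import ThreadPoolExecutor

class TinyCell(torch.nn.Module):
    def forward(self, x, h):
        return torch.tanh(x + h)

tiny_traced = torch.jit.trace(TinyCell(), (torch.rand(3, 4), torch.rand(3, 4)))

def serve_one_request(_):
    # each simulated request builds its own inputs and reuses the shared module
    return tiny_traced(torch.rand(3, 4), torch.rand(3, 4))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(serve_one_request, range(8)))
print(len(results), results[0].shape)
###Output
_____no_output_____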
###Markdown
Converting Modules with TorchScript
----------------------------------

There is a reason we used the second version of ``MyCell`` in the code above and not the third, control-flow-laden version (the one containing the control flow submodule). Let's see why.
###Code
class MyDecisionGate(torch.nn.Module):
def forward(self, x):
if x.sum() > 0:
return x
else:
return -x
class MyCell(torch.nn.Module):
def __init__(self, dg):
super(MyCell, self).__init__()
self.dg = dg
self.linear = torch.nn.Linear(4, 4)
def forward(self, x, h):
new_h = torch.tanh(self.dg(self.linear(x)) + h)
return new_h, new_h
my_cell = MyCell(MyDecisionGate())
traced_cell = torch.jit.trace(my_cell, (x, h))
print(traced_cell.code)
###Output
def forward(self,
input: Tensor,
h: Tensor) -> Tuple[Tensor, Tensor]:
_0 = self.dg
_1 = (self.linear).forward(input, )
_2 = (_0).forward(_1, )
_3 = torch.tanh(torch.add(_1, h, alpha=1))
return (_3, _3)
###Markdown
Look at the ``.code`` output: the if-else branch is nowhere to be found! Why? Tracing runs the code, records the operations exactly as they happen, and constructs a ScriptModule from that record — so, unfortunately, the control flow is lost. How, then, can we capture control flow in TorchScript? This is where the **script compiler** comes in: it analyzes your Python source code and transforms it directly into TorchScript.
Let's convert the ``MyDecisionGate`` class with the script compiler.
###Code
scripted_gate = torch.jit.script(MyDecisionGate())
my_cell = MyCell(scripted_gate)
traced_cell = torch.jit.script(my_cell)
print(traced_cell.code)
###Output
def forward(self,
x: Tensor,
h: Tensor) -> Tuple[Tensor, Tensor]:
_0 = (self.dg).forward((self.linear).forward(x, ), )
new_h = torch.tanh(torch.add(_0, h, alpha=1))
return (new_h, new_h)
###Markdown
(Translator's note) In the output above, the ``(self.dg)`` call is where the control flow lives, but that isn't obvious from this listing alone, so let's also check the TorchScript that was generated for ``scripted_gate`` itself.
###Code
# Added in the Japanese translation: you can see that the if statement is captured in the TorchScript
print(scripted_gate.code)
###Output
def forward(self,
x: Tensor) -> Tensor:
_0 = bool(torch.gt(torch.sum(x, dtype=None), 0))
if _0:
_1 = x
else:
_1 = torch.neg(x)
return _1
###Markdown
And it works! Now that the control flow is captured in TorchScript, let's actually run the program.
###Code
# New inputs
x, h = torch.rand(3, 4), torch.rand(3, 4)
traced_cell(x, h)
###Output
_____no_output_____
###Markdown
**Mixing Scripting and Tracing** Sometimes a PyTorch module is complex and full of varied control flow, and you will want to convert it with scripting rather than tracing. In situations like this, the two can be composed: ``torch.jit.script`` can be used inline together with the tracing of a module. Let's look at the first example (``torch.jit.trace`` is used inline, and the result is then scripted):
###Code
class MyRNNLoop(torch.nn.Module):
def __init__(self):
super(MyRNNLoop, self).__init__()
self.cell = torch.jit.trace(MyCell(scripted_gate), (x, h))
def forward(self, xs):
h, y = torch.zeros(3, 4), torch.zeros(3, 4)
for i in range(xs.size(0)):
y, h = self.cell(xs[i], h)
return y, h
rnn_loop = torch.jit.script(MyRNNLoop())
print(rnn_loop.code)
###Output
def forward(self,
xs: Tensor) -> Tuple[Tensor, Tensor]:
h = torch.zeros([3, 4], dtype=None, layout=None, device=None, pin_memory=None)
y = torch.zeros([3, 4], dtype=None, layout=None, device=None, pin_memory=None)
y0 = y
h0 = h
for i in range(torch.size(xs, 0)):
_0 = (self.cell).forward(torch.select(xs, 0, i), h0, )
y1, h1, = _0
y0, h0 = y1, h1
return (y0, h0)
###Markdown
Let's look at the next example (here the ``MyRNNLoop`` class from the code above is used inline via ``torch.jit.script``, and the wrapping module is then traced).
###Code
class WrapRNN(torch.nn.Module):
def __init__(self):
super(WrapRNN, self).__init__()
self.loop = torch.jit.script(MyRNNLoop())
def forward(self, xs):
y, h = self.loop(xs)
return torch.relu(y)
traced = torch.jit.trace(WrapRNN(), (torch.rand(10, 3, 4)))
print(traced.code)
###Output
def forward(self,
argument_1: Tensor) -> Tensor:
_0, h, = (self.loop).forward(argument_1, )
return torch.relu(h)
###Markdown
In this way, scripting and tracing can be composed: each can invoke TorchScript produced by the other, and the two can be used together in the same model.

Saving and Loading Models with TorchScript
-------------------------

Finally, we cover the APIs for saving TorchScript modules to disk in an archive format and loading them back. This format bundles code, parameters, attributes, and debug information, and it is a standalone representation that can be loaded in a completely separate process. Let's save and load our wrapped RNN model.
###Code
traced.save('wrapped_rnn.zip')
loaded = torch.jit.load('wrapped_rnn.zip')
print(loaded)
print(loaded.code)
###Output
RecursiveScriptModule(
original_name=WrapRNN
(loop): RecursiveScriptModule(
original_name=MyRNNLoop
(cell): RecursiveScriptModule(
original_name=MyCell
(dg): RecursiveScriptModule(original_name=MyDecisionGate)
(linear): RecursiveScriptModule(original_name=Linear)
)
)
)
def forward(self,
argument_1: Tensor) -> Tensor:
_0, h, = (self.loop).forward(argument_1, )
return torch.relu(h)
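###Markdown
As a final, hedged aside (our own addition, not part of the original tutorial): the claim that the archive is a standalone representation can be checked by loading it in a completely separate Python process that knows nothing about the classes defined in this notebook.
###Code
# Hypothetical sketch (our own addition): load the archive saved above from a
# fresh Python subprocess, with none of this notebook's class definitions.
import subprocess, sys

snippet = (
    "import torch; "
    "m = torch.jit.load('wrapped_rnn.zip'); "
    "print(m(torch.rand(10, 3, 4)).shape)"
)
print(subprocess.run([sys.executable, '-c', snippet],
                     capture_output=True, text=True).stdout)
###Output
_____no_output_____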
|
nbs/00_test.ipynb | ###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
import torch
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df3 = pd.DataFrame(dict(a=[1,2],b=['a','c']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
test_fail(lambda: test_eq(df1,df3), contains='==')
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
test_eq(torch.zeros(10), torch.zeros(10, dtype=torch.float64))
test_eq(torch.zeros(10), torch.ones(10)-1)
test_fail(lambda:test_eq(torch.zeros(10), torch.ones(1, 10)), contains='==')
test_eq(torch.zeros(3), [0,0,0])
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return all(abs(a_-b_)<eps for a_,b_ in zip(a,b))
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(ax.figure.canvas.tostring_argb())
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
class ExceptionExpected:
"Context manager that tests if an exception is raised"
def __init__(self, ex=Exception, regex=''): self.ex,self.regex = ex,regex
def __enter__(self): pass
def __exit__(self, type, value, traceback):
if not isinstance(value, self.ex) or (self.regex and not re.search(self.regex, f'{value.args}')):
raise TypeError(f"Expected {self.ex.__name__}({self.regex}) not raised.")
return True
def _tst_1(): assert False, "This is a test"
def _tst_2(): raise SyntaxError
with ExceptionExpected(): _tst_1()
with ExceptionExpected(ex=AssertionError, regex="This is a test"): _tst_1()
with ExceptionExpected(ex=SyntaxError): _tst_2()
###Output
_____no_output_____
###Markdown
`exception` is an abbreviation for `ExceptionExpected()`.
###Code
#export
exception = ExceptionExpected()
with exception: _tst_1()
#hide
def _f():
with ExceptionExpected(): 1
test_fail(partial(_f))
def _f():
with ExceptionExpected(SyntaxError): assert False
test_fail(partial(_f))
def _f():
with ExceptionExpected(AssertionError, "Yes"): assert False, "No"
test_fail(partial(_f))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return is_close(np.array(a), np.array(b), eps=eps)
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(np.frombuffer(ax.figure.canvas.tostring_argb(), dtype=np.uint8))
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_core.foundation.ipynb.
Converted 01a_core.utils.ipynb.
Converted 01b_core.dispatch.ipynb.
Converted 01c_core.transform.ipynb.
Converted 02_core.script.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return is_close(np.array(a), np.array(b), eps=eps)
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(np.frombuffer(ax.figure.canvas.tostring_argb(), dtype=np.uint8))
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
def test_sig(f, b):
"Test the signature of an object"
test_eq(str(inspect.signature(f)), b)
def func_1(h,i,j): pass
def func_2(h,i=3, j=[5,6]): pass
class T:
def __init__(self, a, b): pass
test_sig(func_1, '(h, i, j)')
test_sig(func_2, '(h, i=3, j=[5, 6])')
test_sig(T, '(a, b)')
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_foundation.ipynb.
Converted 01_foundation_experiment.ipynb.
Converted 02_utils.ipynb.
Converted 03_dispatch.ipynb.
Converted 04_transform.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
import torch
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df3 = pd.DataFrame(dict(a=[1,2],b=['a','c']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
test_fail(lambda: test_eq(df1,df3), contains='==')
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
test_eq(torch.zeros(10), torch.zeros(10, dtype=torch.float64))
test_eq(torch.zeros(10), torch.ones(10)-1)
test_fail(lambda:test_eq(torch.zeros(10), torch.ones(1, 10)), contains='==')
test_eq(torch.zeros(3), [0,0,0])
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return all(abs(a_-b_)<eps for a_,b_ in zip(a,b))
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(ax.figure.canvas.tostring_argb())
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
class ExceptionExpected:
"Context manager that tests if an exception is raised"
def __init__(self, ex=Exception, regex=''): self.ex,self.regex = ex,regex
def __enter__(self): pass
def __exit__(self, type, value, traceback):
if not isinstance(value, self.ex) or (self.regex and not re.search(self.regex, f'{value.args}')):
raise TypeError(f"Expected {self.ex.__name__}({self.regex}) not raised.")
return True
def _tst_1(): assert False, "This is a test"
def _tst_2(): raise SyntaxError
with ExceptionExpected(): _tst_1()
with ExceptionExpected(ex=AssertionError, regex="This is a test"): _tst_1()
with ExceptionExpected(ex=SyntaxError): _tst_2()
###Output
_____no_output_____
###Markdown
`exception` is an abbreviation for `ExceptionExpected()`.
###Code
#export
exception = ExceptionExpected()
with exception: _tst_1()
#hide
def _f():
with ExceptionExpected(): 1
test_fail(partial(_f))
def _f():
with ExceptionExpected(SyntaxError): assert False
test_fail(partial(_f))
def _f():
with ExceptionExpected(AssertionError, "Yes"): assert False, "No"
test_fail(partial(_f))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains='', args=None, kwargs=None):
    "Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
    args, kwargs = args or [], kwargs or {}
try: f(*args, **kwargs)
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
###Output
_____no_output_____
###Markdown
We can also pass `args` and `kwargs` to the function to check if it fails with special inputs.
###Code
def _fail_args(a):
if a == 5:
raise ValueError
test_fail(_fail_args, args=(5,))
test_fail(_fail_args, kwargs=dict(a=5))
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
import torch
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df3 = pd.DataFrame(dict(a=[1,2],b=['a','c']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
test_fail(lambda: test_eq(df1,df3), contains='==')
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
test_eq(torch.zeros(10), torch.zeros(10, dtype=torch.float64))
test_eq(torch.zeros(10), torch.ones(10)-1)
test_fail(lambda:test_eq(torch.zeros(10), torch.ones(1, 10)), contains='==')
test_eq(torch.zeros(3), [0,0,0])
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return all(abs(a_-b_)<eps for a_,b_ in zip(a,b))
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(ax.figure.canvas.tostring_argb())
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
class ExceptionExpected:
"Context manager that tests if an exception is raised"
def __init__(self, ex=Exception, regex=''): self.ex,self.regex = ex,regex
def __enter__(self): pass
def __exit__(self, type, value, traceback):
if not isinstance(value, self.ex) or (self.regex and not re.search(self.regex, f'{value.args}')):
raise TypeError(f"Expected {self.ex.__name__}({self.regex}) not raised.")
return True
def _tst_1(): assert False, "This is a test"
def _tst_2(): raise SyntaxError
with ExceptionExpected(): _tst_1()
with ExceptionExpected(ex=AssertionError, regex="This is a test"): _tst_1()
with ExceptionExpected(ex=SyntaxError): _tst_2()
###Output
_____no_output_____
###Markdown
`exception` is an abbreviation for `ExceptionExpected()`.
###Code
#export
exception = ExceptionExpected()
with exception: _tst_1()
#hide
def _f():
with ExceptionExpected(): 1
test_fail(partial(_f))
def _f():
with ExceptionExpected(SyntaxError): assert False
test_fail(partial(_f))
def _f():
with ExceptionExpected(AssertionError, "Yes"): assert False, "No"
test_fail(partial(_f))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return all(abs(a_-b_)<eps for a_,b_ in zip(a,b))
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(ax.figure.canvas.tostring_argb())
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_foundation.ipynb.
Converted 02_utils.ipynb.
Converted 03_dispatch.ipynb.
Converted 04_transform.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return is_close(np.array(a), np.array(b), eps=eps)
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(np.frombuffer(ax.figure.canvas.tostring_argb(), dtype=np.uint8))
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_core_foundation.ipynb.
Converted 01a_core_utils.ipynb.
Converted 01b_core_dispatch.ipynb.
Converted 01c_core_transform.ipynb.
Converted 02_core_script.ipynb.
Converted 03_torchcore.ipynb.
Converted 03a_layers.ipynb.
Converted 04_data_load.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_data_block.ipynb.
Converted 08_vision_core.ipynb.
Converted 09_vision_augment.ipynb.
Converted 09a_vision_data.ipynb.
Converted 09b_vision_utils.ipynb.
Converted 10_pets_tutorial.ipynb.
Converted 11_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 15a_vision_models_unet.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision_learner.ipynb.
Converted 22_tutorial_imagenette.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block_examples.ipynb.
Converted 60_medical_imaging.ipynb.
Converted 65_medical_text.ipynb.
Converted 70_callback_wandb.ipynb.
Converted 71_callback_tensorboard.ipynb.
Converted 90_xse_resnext.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains='', args=None, kwargs=None):
    "Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
    args, kwargs = args or [], kwargs or {}
try: f(*args, **kwargs)
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
###Output
_____no_output_____
###Markdown
We can also pass `args` and `kwargs` to the function to check if it fails with special inputs.
###Code
def _fail_args(a):
if a == 5:
raise ValueError
test_fail(_fail_args, args=(5,))
test_fail(_fail_args, kwargs=dict(a=5))
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
import torch
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df3 = pd.DataFrame(dict(a=[1,2],b=['a','c']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
test_fail(lambda: test_eq(df1,df3), contains='==')
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
test_eq(torch.zeros(10), torch.zeros(10, dtype=torch.float64))
test_eq(torch.zeros(10), torch.ones(10)-1)
test_fail(lambda:test_eq(torch.zeros(10), torch.ones(1, 10)), contains='==')
test_eq(torch.zeros(3), [0,0,0])
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return all(abs(a_-b_)<eps for a_,b_ in zip(a,b))
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(ax.figure.canvas.tostring_argb())
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
class ExceptionExpected:
"Context manager that tests if an exception is raised"
def __init__(self, ex=Exception, regex=''): self.ex,self.regex = ex,regex
def __enter__(self): pass
def __exit__(self, type, value, traceback):
if not isinstance(value, self.ex) or (self.regex and not re.search(self.regex, f'{value.args}')):
raise TypeError(f"Expected {self.ex.__name__}({self.regex}) not raised.")
return True
def _tst_1(): assert False, "This is a test"
def _tst_2(): raise SyntaxError
with ExceptionExpected(): _tst_1()
with ExceptionExpected(ex=AssertionError, regex="This is a test"): _tst_1()
with ExceptionExpected(ex=SyntaxError): _tst_2()
###Output
_____no_output_____
###Markdown
`exception` is an abbreviation for `ExceptionExpected()`.
###Code
#export
exception = ExceptionExpected()
with exception: _tst_1()
#hide
def _f():
with ExceptionExpected(): 1
test_fail(partial(_f))
def _f():
with ExceptionExpected(SyntaxError): assert False
test_fail(partial(_f))
def _f():
with ExceptionExpected(AssertionError, "Yes"): assert False, "No"
test_fail(partial(_f))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
import torch
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
test_eq(torch.zeros(10), torch.zeros(10, dtype=torch.float64))
test_eq(torch.zeros(10), torch.ones(10)-1)
test_fail(lambda:test_eq(torch.zeros(10), torch.ones(1, 10)), contains='==')
test_eq(torch.zeros(3), [0,0,0])
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return all(abs(a_-b_)<eps for a_,b_ in zip(a,b))
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(ax.figure.canvas.tostring_argb())
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
class ExceptionExpected:
"Context manager that tests if an exception is raised"
def __init__(self, ex=Exception, regex=''): self.ex,self.regex = ex,regex
def __enter__(self): pass
def __exit__(self, type, value, traceback):
if not isinstance(value, self.ex) or (self.regex and not re.search(self.regex, f'{value.args}')):
raise TypeError(f"Expected {self.ex.__name__}({self.regex}) not raised.")
return True
def _tst_1(): assert False, "This is a test"
def _tst_2(): raise SyntaxError
with ExceptionExpected(): _tst_1()
with ExceptionExpected(ex=AssertionError, regex="This is a test"): _tst_1()
with ExceptionExpected(ex=SyntaxError): _tst_2()
###Output
_____no_output_____
###Markdown
`exception` is an abbreviation for `ExceptionExpected()`.
###Code
#export
exception = ExceptionExpected()
with exception: _tst_1()
#hide
def _f():
with ExceptionExpected(): 1
test_fail(partial(_f))
def _f():
with ExceptionExpected(SyntaxError): assert False
test_fail(partial(_f))
def _f():
with ExceptionExpected(AssertionError, "Yes"): assert False, "No"
test_fail(partial(_f))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 01_foundation.ipynb.
Converted 02_utils-Copy1.ipynb.
Converted 02_utils.ipynb.
Converted 03_dispatch.ipynb.
Converted 04_transform.ipynb.
Converted 05_logargs.ipynb.
Converted 06_meta.ipynb.
Converted 07_script.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator. Tendo: `test_fail` checks that `f()` raises an exception; if nothing is raised, the test fails with the message passed in `msg`, and when `contains` is given it must appear in the text of the raised exception
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f() # run the func
except Exception as e: # if an Exception from f()
assert not contains or contains in str(e) #check if the Exception contains `contains`
return
assert False,f"Expected exception but none raised. {msg}" # if it doesn't raise an 2 Exceptions
assert False
def _fail(): raise Exception("foobar")
# def _fail(): pass # redefining _fail like this (so it raises nothing) would make the test_fail calls below fail
test_fail(_fail, contains="foo")
# test_fail(_fail, msg='Hello there', contains="foo") # with the no-exception _fail above, this would fail and report msg='Hello there'
def _fail(): raise Exception()
test_fail(_fail) # no exception text is raised from the func
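# added illustration (not from the original cell): `msg` is only shown when f() raises nothing at all,
# e.g. test_fail(lambda: 1+1, msg='expected an exception') would fail and report that message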
###Output
_____no_output_____
###Markdown
Tendo: `test` checks that the comparison `cmp` (reported under the name `cname`) holds between `a` and `b`
###Code
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__ #get the name of the test being checked
assert cmp(a,b),f"{cname}:\n{a}\n{b}" # if the check fails, raise an AssertionError in the form shown
# run to show example
operator.eq([1,2],[1,2]), operator.eq.__name__
# run to show example
# x = lambda: test([1,2],[1], operator.eq)
# x()
test([1,2],[1,2], operator.eq) # check for equality
test_fail(lambda: test([1,2],[1], operator.eq)) #the test is run as `f()` in `test_fail`
test([1,2],[1], operator.ne) #check for non-equality
test_fail(lambda: test([1,2],[1,2], operator.ne))
###Output
_____no_output_____
###Markdown
`show_doc` is from project `nbdev`. It can be used to see the documentation of any python object
###Code
show_doc(all_equal)
all_equal(['abc'], ['abc'])
test(['abc'], ['abc'], all_equal) #tests to make sure all the items in both lists are equal
operator.ne([1,2],[4,4])
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`: `test_eq` is the same as `==` and `test_ne` is the same as `!=`
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==') #note that '==' is passed as the display name for our cmp this time; `equals` is the library helper documented with `show_doc` above
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq([1,2],map(float,[1,2])) #added by tendo
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==") # the test fails because the args to `test_eq` are different
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'}) # the equality test passes even when the order is changed
#hide
import pandas as pd
# the equality test even works for dataframes!
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
df1.iloc[0]
(df2.iloc[0])
###Output
_____no_output_____
###Markdown
In order to test that two args are equal and of the same type, we use `test_eq_type`
###Code
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b)) #make sure the types match
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b)) #if the input arg is a collection map the type over and check for match
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.)) # int and float
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=') #`nequals` is defined above in this notebook
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
###Output
_____no_output_____
###Markdown
In order to check whether two args `a` and `b` are within +/- `eps` of each other, we use `is_close`. The `test_close` test builds on it
###Code
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'): #if input is ndarray
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)): # if input is collection
return is_close(np.array(a), np.array(b), eps=eps) #convert the collection to an array and recurse
return abs(a-b)<eps
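# added illustration (not from the original cell) of how is_close behaves:
# is_close(1, 1.000001)                 # True, the difference is below the default eps=1e-5
# is_close([1, 2], [1.05, 2], eps=0.1)  # True, collections are converted to arrays and compared element-wise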
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close') # the error message if test fails will be 'close'
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
operator.is_([1], [1]) #two lists are actually different objects in memory so they are not the same
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
###Output
_____no_output_____
###Markdown
If you have two lists with exactly the same items but in a different order, use `test_shuffled` to check that they are shuffled versions of each other
###Code
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b) # first make sure the two args are not the same
test_eq(Counter(a), Counter(b)) # count each element in both lists in order to be sure they match
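# added note: for example, Counter('aab') == Counter('aba') is True — order is ignored but each item's count must match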
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b)) # the args must contain exactly the same items, just shuffled
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
###Output
_____no_output_____
###Markdown
Use `test_stdout` to ensure that the output printed by the function call is what you expect
###Code
io.StringIO().getvalue()
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(np.frombuffer(ax.figure.canvas.tostring_argb(), dtype=np.uint8))
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#hide
from nbdev.export import notebook2script
# notebook2script()
###Output
_____no_output_____
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return is_close(np.array(a), np.array(b), eps=eps)
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(np.frombuffer(ax.figure.canvas.tostring_argb(), dtype=np.uint8))
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_core.foundation.ipynb.
Converted 01a_core.utils.ipynb.
Converted 01b_core.dispatch.ipynb.
Converted 01c_core.transform.ipynb.
Converted 02_core.script.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return all(abs(a_-b_)<eps for a_,b_ in zip(a,b))
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(ax.figure.canvas.tostring_argb())
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
class ExceptionExpected:
"Context manager that tests if an exception is raised"
def __init__(self, ex=Exception, regex=''): self.ex,self.regex = ex,regex
def __enter__(self): pass
def __exit__(self, type, value, traceback):
if not isinstance(value, self.ex) or (self.regex and not re.search(self.regex, f'{value.args}')):
raise TypeError(f"Expected {self.ex.__name__}({self.regex}) not raised.")
return True
def _tst_1(): assert False, "This is a test"
def _tst_2(): raise SyntaxError
with ExceptionExpected(): _tst_1()
with ExceptionExpected(ex=AssertionError, regex="This is a test"): _tst_1()
with ExceptionExpected(ex=SyntaxError): _tst_2()
###Output
_____no_output_____
###Markdown
`exception` is an abbreviation for `ExceptionExpected()`.
###Code
#export
exception = ExceptionExpected()
with exception: _tst_1()
#hide
def _f():
with ExceptionExpected(): 1
test_fail(partial(_f))
def _f():
with ExceptionExpected(SyntaxError): assert False
test_fail(partial(_f))
def _f():
with ExceptionExpected(AssertionError, "Yes"): assert False, "No"
test_fail(partial(_f))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 01_foundation.ipynb.
Converted 02_utils-Copy1.ipynb.
Converted 02_utils.ipynb.
Converted 03_dispatch.ipynb.
Converted 04_transform.ipynb.
Converted 05_logargs.ipynb.
Converted 06_meta.ipynb.
Converted 07_script.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
import torch
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df3 = pd.DataFrame(dict(a=[1,2],b=['a','c']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
test_fail(lambda: test_eq(df1,df3), contains='==')
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
test_eq(torch.zeros(10), torch.zeros(10, dtype=torch.float64))
test_eq(torch.zeros(10), torch.ones(10)-1)
test_fail(lambda:test_eq(torch.zeros(10), torch.ones(1, 10)), contains='==')
test_eq(torch.zeros(3), [0,0,0])
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return all(abs(a_-b_)<eps for a_,b_ in zip(a,b))
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(ax.figure.canvas.tostring_argb())
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
class ExceptionExpected:
"Context manager that tests if an exception is raised"
def __init__(self, ex=Exception, regex=''): self.ex,self.regex = ex,regex
def __enter__(self): pass
def __exit__(self, type, value, traceback):
if not isinstance(value, self.ex) or (self.regex and not re.search(self.regex, f'{value.args}')):
raise TypeError(f"Expected {self.ex.__name__}({self.regex}) not raised.")
return True
def _tst_1(): assert False, "This is a test"
def _tst_2(): raise SyntaxError
with ExceptionExpected(): _tst_1()
with ExceptionExpected(ex=AssertionError, regex="This is a test"): _tst_1()
with ExceptionExpected(ex=SyntaxError): _tst_2()
###Output
_____no_output_____
###Markdown
`exception` is an abbreviation for `ExceptionExpected()`.
###Code
#export
exception = ExceptionExpected()
with exception: _tst_1()
#hide
def _f():
with ExceptionExpected(): 1
test_fail(partial(_f))
def _f():
with ExceptionExpected(SyntaxError): assert False
test_fail(partial(_f))
def _f():
with ExceptionExpected(AssertionError, "Yes"): assert False, "No"
test_fail(partial(_f))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains='', args=None, kwargs=None):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
args, kwargs = args or [], kwargs or {}
try: f(*args, **kwargs)
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
###Output
_____no_output_____
###Markdown
We can also pass `args` and `kwargs` to the function to check if it fails with special inputs.
###Code
def _fail_args(a):
if a == 5:
raise ValueError
test_fail(_fail_args, args=(5,))
test_fail(_fail_args, kwargs=dict(a=5))
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
import torch
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df3 = pd.DataFrame(dict(a=[1,2],b=['a','c']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
test_fail(lambda: test_eq(df1,df3), contains='==')
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
test_eq(torch.zeros(10), torch.zeros(10, dtype=torch.float64))
test_eq(torch.zeros(10), torch.ones(10)-1)
test_fail(lambda:test_eq(torch.zeros(10), torch.ones(1, 10)), contains='==')
test_eq(torch.zeros(3), [0,0,0])
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return all(abs(a_-b_)<eps for a_,b_ in zip(a,b))
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(ax.figure.canvas.tostring_argb())
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
class ExceptionExpected:
"Context manager that tests if an exception is raised"
def __init__(self, ex=Exception, regex=''): self.ex,self.regex = ex,regex
def __enter__(self): pass
def __exit__(self, type, value, traceback):
if not isinstance(value, self.ex) or (self.regex and not re.search(self.regex, f'{value.args}')):
raise TypeError(f"Expected {self.ex.__name__}({self.regex}) not raised.")
return True
def _tst_1(): assert False, "This is a test"
def _tst_2(): raise SyntaxError
with ExceptionExpected(): _tst_1()
with ExceptionExpected(ex=AssertionError, regex="This is a test"): _tst_1()
with ExceptionExpected(ex=SyntaxError): _tst_2()
###Output
_____no_output_____
###Markdown
`exception` is an abbreviation for `ExceptionExpected()`.
###Code
#export
exception = ExceptionExpected()
with exception: _tst_1()
#hide
def _f():
with ExceptionExpected(): 1
test_fail(partial(_f))
def _f():
with ExceptionExpected(SyntaxError): assert False
test_fail(partial(_f))
def _f():
with ExceptionExpected(AssertionError, "Yes"): assert False, "No"
test_fail(partial(_f))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_basics.ipynb.
Converted 02_foundation.ipynb.
Converted 03_xtras.ipynb.
Converted 03a_parallel.ipynb.
Converted 03b_net.ipynb.
Converted 04_dispatch.ipynb.
Converted 05_transform.ipynb.
Converted 07_meta.ipynb.
Converted 08_script.ipynb.
Converted index.ipynb.
###Markdown
Test> Helper functions to quickly write tests in notebooks Simple test functions We can check that code raises an exception when that's expected (`test_fail`). To test for equality or inequality (with different types of things) we define a simple function `test` that compares two objects with a given `cmp` operator.
###Code
#export
def test_fail(f, msg='', contains=''):
"Fails with `msg` unless `f()` raises an exception and (optionally) has `contains` in `e.args`"
try: f()
except Exception as e:
assert not contains or contains in str(e)
return
assert False,f"Expected exception but none raised. {msg}"
def _fail(): raise Exception("foobar")
test_fail(_fail, contains="foo")
def _fail(): raise Exception()
test_fail(_fail)
#export
def test(a, b, cmp,cname=None):
"`assert` that `cmp(a,b)`; display inputs and `cname or cmp.__name__` if it fails"
if cname is None: cname=cmp.__name__
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
test([1,2],[1,2], operator.eq)
test_fail(lambda: test([1,2],[1], operator.eq))
test([1,2],[1], operator.ne)
test_fail(lambda: test([1,2],[1,2], operator.ne))
show_doc(all_equal)
test(['abc'], ['abc'], all_equal)
show_doc(equals)
test([['abc'],['a']], [['abc'],['a']], equals)
#export
def nequals(a,b):
"Compares `a` and `b` for `not equals`"
return not equals(a,b)
test(['abc'], ['ab' ], nequals)
###Output
_____no_output_____
###Markdown
test_eq test_ne, etc... Just use `test_eq`/`test_ne` to test for `==`/`!=`. `test_eq_type` checks things are equal and of the same type. We define them using `test`:
###Code
#export
def test_eq(a,b):
"`test` that `a==b`"
test(a,b,equals, '==')
test_eq([1,2],[1,2])
test_eq([1,2],map(int,[1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq(array([1,2]),array([1,2]))
test_eq([array([1,2]),3],[array([1,2]),3])
test_eq(dict(a=1,b=2), dict(b=2,a=1))
test_fail(lambda: test_eq([1,2], 1), contains="==")
test_fail(lambda: test_eq(None, np.array([1,2])), contains="==")
test_eq({'a', 'b', 'c'}, {'c', 'a', 'b'})
#hide
import pandas as pd
df1 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
df2 = pd.DataFrame(dict(a=[1,2],b=['a','b']))
test_eq(df1,df2)
test_eq(df1.a,df2.a)
class T(pd.Series): pass
test_eq(df1.iloc[0], T(df2.iloc[0]))
#export
def test_eq_type(a,b):
"`test` that `a==b` and are same type"
test_eq(a,b)
test_eq(type(a),type(b))
if isinstance(a,(list,tuple)): test_eq(map(type,a),map(type,b))
test_eq_type(1,1)
test_fail(lambda: test_eq_type(1,1.))
test_eq_type([1,1],[1,1])
test_fail(lambda: test_eq_type([1,1],(1,1)))
test_fail(lambda: test_eq_type([1,1],[1,1.]))
#export
def test_ne(a,b):
"`test` that `a!=b`"
test(a,b,nequals,'!=')
test_ne([1,2],[1])
test_ne([1,2],[1,3])
test_ne(array([1,2]),array([1,1]))
test_ne(array([1,2]),array([1,1]))
test_ne([array([1,2]),3],[array([1,2])])
test_ne([3,4],array([3]))
test_ne([3,4],array([3,5]))
test_ne(dict(a=1,b=2), ['a', 'b'])
test_ne(['a', 'b'], dict(a=1,b=2))
#export
def is_close(a,b,eps=1e-5):
"Is `a` within `eps` of `b`"
if hasattr(a, '__array__') or hasattr(b,'__array__'):
return (abs(a-b)<eps).all()
if isinstance(a, (Iterable,Generator)) or isinstance(b, (Iterable,Generator)):
return is_close(np.array(a), np.array(b), eps=eps)
return abs(a-b)<eps
#export
def test_close(a,b,eps=1e-5):
"`test` that `a` is within `eps` of `b`"
test(a,b,partial(is_close,eps=eps),'close')
test_close(1,1.001,eps=1e-2)
test_fail(lambda: test_close(1,1.001))
test_close([-0.001,1.001], [0.,1.], eps=1e-2)
test_close(np.array([-0.001,1.001]), np.array([0.,1.]), eps=1e-2)
test_close(array([-0.001,1.001]), array([0.,1.]), eps=1e-2)
#export
def test_is(a,b):
"`test` that `a is b`"
test(a,b,operator.is_, 'is')
test_fail(lambda: test_is([1], [1]))
a = [1]
test_is(a, a)
#export
def test_shuffled(a,b):
"`test` that `a` and `b` are shuffled versions of the same sequence of items"
test_ne(a, b)
test_eq(Counter(a), Counter(b))
a = list(range(50))
b = copy(a)
random.shuffle(b)
test_shuffled(a,b)
test_fail(lambda:test_shuffled(a,a))
a = 'abc'
b = 'abcabc'
test_fail(lambda:test_shuffled(a,b))
a = ['a', 42, True]
b = [42, True, 'a']
test_shuffled(a,b)
#export
def test_stdout(f, exp, regex=False):
"Test that `f` prints `exp` to stdout, optionally checking as `regex`"
s = io.StringIO()
with redirect_stdout(s): f()
if regex: assert re.search(exp, s.getvalue()) is not None
else: test_eq(s.getvalue(), f'{exp}\n' if len(exp) > 0 else '')
test_stdout(lambda: print('hi'), 'hi')
test_fail(lambda: test_stdout(lambda: print('hi'), 'ho'))
test_stdout(lambda: 1+1, '')
test_stdout(lambda: print('hi there!'), r'^hi.*!$', regex=True)
#export
def test_warns(f, show=False):
with warnings.catch_warnings(record=True) as w:
f()
test_ne(len(w), 0)
if show:
for e in w: print(f"{e.category}: {e.message}")
test_warns(lambda: warnings.warn("Oh no!"), {})
test_fail(lambda: test_warns(lambda: 2+2))
test_warns(lambda: warnings.warn("Oh no!"), show=True)
#export
TEST_IMAGE = 'images/puppy.jpg'
im = Image.open(TEST_IMAGE).resize((128,128)); im
#export
TEST_IMAGE_BW = 'images/mnist3.png'
im = Image.open(TEST_IMAGE_BW).resize((128,128)); im
#export
def test_fig_exists(ax):
"Test there is a figure displayed in `ax`"
assert ax and len(np.frombuffer(ax.figure.canvas.tostring_argb(), dtype=np.uint8))
fig,ax = plt.subplots()
ax.imshow(array(im));
test_fig_exists(ax)
#export
def test_sig(f, b):
"Test the signature of an object"
test_eq(str(inspect.signature(f)), b)
def func_1(h,i,j): pass
def func_2(h,i=3, j=[5,6]): pass
class T:
def __init__(self, a, b): pass
test_sig(func_1, '(h, i, j)')
test_sig(func_2, '(h, i=3, j=[5, 6])')
test_sig(T, '(a, b)')
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_test.ipynb.
Converted 01_foundation.ipynb.
Converted 02_utils.ipynb.
Converted 03_dispatch.ipynb.
Converted 04_transform.ipynb.
Converted index.ipynb.
|
templates/pull_request_analysis_template.ipynb | ###Markdown
Pull Request Analysis Visualization Limitations for Reporting on Several Repos The visualizations in this notebook are, like most, able to coherently display information for between 1 and 8 different repositories simultaneously. Alternatives for Reporting on Repo Groups, Comprising Many Repos The included queries could be rewritten to show an entire repository group's characteristics if that is your primary aim. Specifically, any query could replace this line: ``` WHERE repo.repo_id = {repo_id}``` with this line to accomplish the goal of comparing different groups of repositories: ``` WHERE repogroups.repo_group_id = {repo_id}``` Simply replace the set of ids in the **Pull Request Filter** section with a list of repo_group_id numbers as well, to accomplish this view. ------------
###Code
import psycopg2
import pandas as pd
import sqlalchemy as salc
import matplotlib
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
import datetime
import json
warnings.filterwarnings('ignore')
with open("config.json") as config_file:
config = json.load(config_file)
database_connection_string = 'postgres+psycopg2://{}:{}@{}:{}/{}'.format(config['user'], config['password'], config['host'], config['port'], config['database'])
dbschema='augur_data'
engine = salc.create_engine(
database_connection_string,
connect_args={'options': '-csearch_path={}'.format(dbschema)})
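# optional sanity check (a sketch, not part of the original template; it assumes the credentials
# in config.json are valid and the augur_data schema is reachable):
# pd.read_sql(salc.sql.text("SELECT 1 AS ok"), con=engine)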
###Output
_____no_output_____
###Markdown
Control Cell
###Code
#declare all repo ids you would like to produce charts for
repo_set = {25440, 25448}
#can be set as 'competitors' or 'repo'
#'competitors' will group graphs by type, so it is easy to compare across repos
# 'repo' will group graphs by repo so it is easy to look at all the contributor data for each repo
display_grouping = 'repo'
#if display_grouping is set to 'competitors', enter the repo ids you do not want to alias; if display_grouping is set to 'repo' the list will not affect anything
not_aliased_repos = [25440, 25448]
begin_date = '2019-10-01'
end_date = '2020-10-31'
#specify number of outliers for removal in scatter plot
scatter_plot_outliers_removed = 5
save_files = False
###Output
_____no_output_____
###Markdown
Identifying the Longest Running Pull Requests Getting the Data
###Code
pr_all = pd.DataFrame()
for repo_id in repo_set:
pr_query = salc.sql.text(f"""
SELECT
repo.repo_id AS repo_id,
pull_requests.pr_src_id AS pr_src_id,
repo.repo_name AS repo_name,
pr_src_author_association,
repo_groups.rg_name AS repo_group,
pull_requests.pr_src_state,
pull_requests.pr_merged_at,
pull_requests.pr_created_at AS pr_created_at,
pull_requests.pr_closed_at AS pr_closed_at,
date_part( 'year', pr_created_at :: DATE ) AS CREATED_YEAR,
date_part( 'month', pr_created_at :: DATE ) AS CREATED_MONTH,
date_part( 'year', pr_closed_at :: DATE ) AS CLOSED_YEAR,
date_part( 'month', pr_closed_at :: DATE ) AS CLOSED_MONTH,
pr_src_meta_label,
pr_head_or_base,
( EXTRACT ( EPOCH FROM pull_requests.pr_closed_at ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 3600 AS hours_to_close,
( EXTRACT ( EPOCH FROM pull_requests.pr_closed_at ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 86400 AS days_to_close,
( EXTRACT ( EPOCH FROM first_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 3600 AS hours_to_first_response,
( EXTRACT ( EPOCH FROM first_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 86400 AS days_to_first_response,
( EXTRACT ( EPOCH FROM last_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 3600 AS hours_to_last_response,
( EXTRACT ( EPOCH FROM last_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 86400 AS days_to_last_response,
first_response_time,
last_response_time,
average_time_between_responses,
assigned_count,
review_requested_count,
labeled_count,
subscribed_count,
mentioned_count,
referenced_count,
closed_count,
head_ref_force_pushed_count,
merged_count,
milestoned_count,
unlabeled_count,
head_ref_deleted_count,
comment_count,
lines_added,
lines_removed,
commit_count,
file_count
FROM
repo,
repo_groups,
pull_requests LEFT OUTER JOIN (
SELECT pull_requests.pull_request_id,
count(*) FILTER (WHERE action = 'assigned') AS assigned_count,
count(*) FILTER (WHERE action = 'review_requested') AS review_requested_count,
count(*) FILTER (WHERE action = 'labeled') AS labeled_count,
count(*) FILTER (WHERE action = 'unlabeled') AS unlabeled_count,
count(*) FILTER (WHERE action = 'subscribed') AS subscribed_count,
count(*) FILTER (WHERE action = 'mentioned') AS mentioned_count,
count(*) FILTER (WHERE action = 'referenced') AS referenced_count,
count(*) FILTER (WHERE action = 'closed') AS closed_count,
count(*) FILTER (WHERE action = 'head_ref_force_pushed') AS head_ref_force_pushed_count,
count(*) FILTER (WHERE action = 'head_ref_deleted') AS head_ref_deleted_count,
count(*) FILTER (WHERE action = 'milestoned') AS milestoned_count,
count(*) FILTER (WHERE action = 'merged') AS merged_count,
MIN(message.msg_timestamp) AS first_response_time,
COUNT(DISTINCT message.msg_timestamp) AS comment_count,
MAX(message.msg_timestamp) AS last_response_time,
(MAX(message.msg_timestamp) - MIN(message.msg_timestamp)) / COUNT(DISTINCT message.msg_timestamp) AS average_time_between_responses
FROM pull_request_events, pull_requests, repo, pull_request_message_ref, message
WHERE repo.repo_id = {repo_id}
AND repo.repo_id = pull_requests.repo_id
AND pull_requests.pull_request_id = pull_request_events.pull_request_id
AND pull_requests.pull_request_id = pull_request_message_ref.pull_request_id
AND pull_request_message_ref.msg_id = message.msg_id
GROUP BY pull_requests.pull_request_id
) response_times
ON pull_requests.pull_request_id = response_times.pull_request_id
LEFT OUTER JOIN (
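                -- all_commit_counts: number of distinct commits on each PR, excluding the merge commit and the PR meta SHA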
SELECT pull_request_commits.pull_request_id, count(DISTINCT pr_cmt_sha) AS commit_count FROM pull_request_commits, pull_requests, pull_request_meta
WHERE pull_requests.pull_request_id = pull_request_commits.pull_request_id
AND pull_requests.pull_request_id = pull_request_meta.pull_request_id
AND pull_requests.repo_id = {repo_id}
AND pr_cmt_sha <> pull_requests.pr_merge_commit_sha
AND pr_cmt_sha <> pull_request_meta.pr_sha
GROUP BY pull_request_commits.pull_request_id
) all_commit_counts
ON pull_requests.pull_request_id = all_commit_counts.pull_request_id
LEFT OUTER JOIN (
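                -- base_labels: the base branch label (pr_src_meta_label) recorded for each pull request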
SELECT MAX(pr_repo_meta_id), pull_request_meta.pull_request_id, pr_head_or_base, pr_src_meta_label
FROM pull_requests, pull_request_meta
WHERE pull_requests.pull_request_id = pull_request_meta.pull_request_id
AND pull_requests.repo_id = {repo_id}
AND pr_head_or_base = 'base'
GROUP BY pull_request_meta.pull_request_id, pr_head_or_base, pr_src_meta_label
) base_labels
ON base_labels.pull_request_id = all_commit_counts.pull_request_id
LEFT OUTER JOIN (
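                -- master_merged_counts: lines added/removed and distinct files changed across each PR's commits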
SELECT sum(cmt_added) AS lines_added, sum(cmt_removed) AS lines_removed, pull_request_commits.pull_request_id, count(DISTINCT cmt_filename) AS file_count
FROM pull_request_commits, commits, pull_requests, pull_request_meta
WHERE cmt_commit_hash = pr_cmt_sha
AND pull_requests.pull_request_id = pull_request_commits.pull_request_id
AND pull_requests.pull_request_id = pull_request_meta.pull_request_id
AND pull_requests.repo_id = {repo_id}
AND commits.repo_id = pull_requests.repo_id
AND commits.cmt_commit_hash <> pull_requests.pr_merge_commit_sha
AND commits.cmt_commit_hash <> pull_request_meta.pr_sha
GROUP BY pull_request_commits.pull_request_id
) master_merged_counts
ON base_labels.pull_request_id = master_merged_counts.pull_request_id
WHERE
repo.repo_group_id = repo_groups.repo_group_id
AND repo.repo_id = pull_requests.repo_id
AND repo.repo_id = {repo_id}
ORDER BY
merged_count DESC
""")
pr_a = pd.read_sql(pr_query, con=engine)
if not pr_all.empty:
pr_all = pd.concat([pr_all, pr_a])
else:
# first repo
pr_all = pr_a
display(pr_all.head())
pr_all.dtypes
###Output
_____no_output_____
###Markdown
Begin data pre-processing and adding columns: Data type changes
###Code
# Cast the count columns to float so they are consistently numeric; the LEFT JOINs above can leave missing values, which an integer column cannot hold
pr_all[['assigned_count',
'review_requested_count',
'labeled_count',
'subscribed_count',
'mentioned_count',
'referenced_count',
'closed_count',
'head_ref_force_pushed_count',
'merged_count',
'milestoned_count',
'unlabeled_count',
'head_ref_deleted_count',
'comment_count',
'commit_count',
'file_count',
'lines_added',
'lines_removed'
]] = pr_all[['assigned_count',
'review_requested_count',
'labeled_count',
'subscribed_count',
'mentioned_count',
'referenced_count',
'closed_count',
'head_ref_force_pushed_count',
'merged_count',
'milestoned_count',
'unlabeled_count',
'head_ref_deleted_count',
'comment_count',
'commit_count',
'file_count',
'lines_added',
'lines_removed'
]].astype(float)
# Convert the year columns to int and then str so they don't display as e.g. 2019.0
pr_all[[
'created_year',
'closed_year']] = pr_all[['created_year',
'closed_year']].fillna(-1).astype(int).astype(str)
pr_all.dtypes
print(pr_all['repo_name'].unique())
###Output
_____no_output_____
###Markdown
Add `average_days_between_responses` and `average_hours_between_responses` columns
###Code
# Convert the average_time_between_responses timedelta into day and hour counts
pr_all['average_days_between_responses'] = pr_all['average_time_between_responses'].map(lambda x: x.days).astype(float)
# Note: only whole days are used here, so the hour figure is days * 24 rather than an exact conversion
pr_all['average_hours_between_responses'] = pr_all['average_time_between_responses'].map(lambda x: x.days * 24).astype(float)
pr_all.head()
###Output
_____no_output_____
###Markdown
Date filtering entire dataframe
###Code
start_date = pd.to_datetime(begin_date)
# end_date = pd.to_datetime('2020-02-01 09:00:00')
end_date = pd.to_datetime(end_date)
pr_all = pr_all[(pr_all['pr_created_at'] > start_date) & (pr_all['pr_closed_at'] < end_date)]
pr_all['created_year'] = pr_all['created_year'].map(int)
pr_all['created_month'] = pr_all['created_month'].map(int)
pr_all['created_month'] = pr_all['created_month'].map(lambda x: '{0:0>2}'.format(x))
pr_all['created_yearmonth'] = pd.to_datetime(pr_all['created_year'].map(str) + '-' + pr_all['created_month'].map(str) + '-01')
pr_all.head(1)
###Output
_____no_output_____
###Markdown
Add `days_to_close` column for pull requests that are still open (closed pull requests already have this column filled from the query). Note: there will be no pull requests that are still open in the dataframe if you filtered by an end date in the above cell.
###Code
import datetime
# getting the number of days of (today - created at) for the PRs that are still open
# and putting this in the days_to_close column
# get timedeltas from creation time to today's date/time
days_to_close_open_pr = datetime.datetime.now() - pr_all.loc[pr_all['pr_src_state'] == 'open']['pr_created_at']
# get num days from above timedelta
days_to_close_open_pr = days_to_close_open_pr.apply(lambda x: x.days).astype(int)
# for only OPEN pr's, set the days_to_close column equal to above dataframe
pr_all.loc[pr_all['pr_src_state'] == 'open'] = pr_all.loc[pr_all['pr_src_state'] == 'open'].assign(days_to_close=days_to_close_open_pr)
pr_all.loc[pr_all['pr_src_state'] == 'open'].head()
###Output
_____no_output_____
###Markdown
Add `closed_yearmonth` column for only CLOSED pull requests
###Code
# initialize the column with all null datetimes
pr_all['closed_yearmonth'] = pd.to_datetime(np.nan)
# Fill column with prettified string of year/month closed that looks like: 2019-07-01
pr_all.loc[pr_all['pr_src_state'] == 'closed'] = pr_all.loc[pr_all['pr_src_state'] == 'closed'].assign(
closed_yearmonth = pd.to_datetime(pr_all.loc[pr_all['pr_src_state'] == 'closed']['closed_year'].astype(int
).map(str) + '-' + pr_all.loc[pr_all['pr_src_state'] == 'closed']['closed_month'].astype(int).map(str) + '-01'))
pr_all.loc[pr_all['pr_src_state'] == 'closed']
###Output
_____no_output_____
###Markdown
Add `merged_flag` column, which holds prettified strings based on whether the `pr_merged_at` column is null or not
###Code
""" Merged flag """
if 'pr_merged_at' in pr_all.columns.values:
pr_all['pr_merged_at'] = pr_all['pr_merged_at'].fillna(0)
pr_all['merged_flag'] = 'Not Merged / Rejected'
pr_all['merged_flag'].loc[pr_all['pr_merged_at'] != 0] = 'Merged / Accepted'
pr_all['merged_flag'].loc[pr_all['pr_src_state'] == 'open'] = 'Still Open'
del pr_all['pr_merged_at']
pr_all['merged_flag']
###Output
_____no_output_____
###Markdown
Split into different dataframes: all, open, closed, and the slowest 20% of these 3 categories (6 dataframes total)
###Code
# Isolate the different state PRs for now
pr_open = pr_all.loc[pr_all['pr_src_state'] == 'open']
pr_closed = pr_all.loc[pr_all['pr_src_state'] == 'closed']
pr_merged = pr_all.loc[pr_all['merged_flag'] == 'Merged / Accepted']
pr_not_merged = pr_all.loc[pr_all['merged_flag'] == 'Not Merged / Rejected']
pr_closed['merged_flag']
###Output
_____no_output_____
###Markdown
Create dataframes that contain the slowest 20% pull requests of each group
###Code
# Filter to the slowest 20% of PRs: those at or above the 80th percentile of days_to_close
def filter_20_per_slowest(input_df):
pr_slow20_filtered = pd.DataFrame()
for value in repo_set:
if not pr_slow20_filtered.empty:
pr_slow20x = input_df.query('repo_id==@value')
pr_slow20x['percentile_rank_local'] = pr_slow20x.days_to_close.rank(pct=True)
pr_slow20x = pr_slow20x.query('percentile_rank_local >= .8', )
pr_slow20_filtered = pd.concat([pr_slow20x, pr_slow20_filtered])
reponame = str(value)
filename = ''.join(['output/pr_slowest20pct', reponame, '.csv'])
pr_slow20x.to_csv(filename)
else:
            # first pass: seed the result with the full input ranked across all repos
pr_slow20_filtered = input_df.copy()
pr_slow20_filtered['percentile_rank_local'] = pr_slow20_filtered.days_to_close.rank(pct=True)
pr_slow20_filtered = pr_slow20_filtered.query('percentile_rank_local >= .8', )
# print(pr_slow20_filtered.describe())
return pr_slow20_filtered
pr_slow20_open = filter_20_per_slowest(pr_open)
pr_slow20_closed = filter_20_per_slowest(pr_closed)
pr_slow20_merged = filter_20_per_slowest(pr_merged)
pr_slow20_not_merged = filter_20_per_slowest(pr_not_merged)
pr_slow20_all = filter_20_per_slowest(pr_all)
pr_slow20_merged#.head()
#create a dictionary with a number as the key and a letter as the value
#this is used to alias repos when the 'competitors' display grouping is used
letters = []
nums = []
alpha = 'a'
for i in range(0, 26):
letters.append(alpha)
alpha = chr(ord(alpha) + 1)
nums.append(i)
letters = [x.upper() for x in letters]
#create dict out of list of numbers and letters
repo_alias_dict = {nums[i]: letters[i] for i in range(len(nums))}
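# e.g. repo_alias_dict == {0: 'A', 1: 'B', ..., 25: 'Z'}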
# create dict in the form {repo_id : repo_name}
aliased_repos = []
repo_dict = {}
count = 0
for repo_id in repo_set:
#find corresponding repo name from each repo_id
repo_name = pr_all.loc[pr_all['repo_id'] == repo_id].iloc[0]['repo_name']
    #if the 'competitors' display grouping is enabled, alias every repo name except those listed in not_aliased_repos
if display_grouping == 'competitors' and not repo_id in not_aliased_repos:
repo_name = 'Repo ' + repo_alias_dict[count]
#add repo_id to list of aliased repos, this is used for ordering
aliased_repos.append(repo_id)
count += 1
#add repo_id and repo names as key value pairs into a dict, this is used to label the title of the visualizations
repo_dict.update({repo_id : repo_name})
#guarantees that the non-aliased repos come first when display_grouping is set to 'competitors'
repo_list = not_aliased_repos + aliased_repos
###Output
_____no_output_____
###Markdown
Start Visualization Methods
###Code
from bokeh.palettes import Colorblind, mpl, Category20
from bokeh.layouts import gridplot
from bokeh.models.annotations import Title
from bokeh.io import export_png
from bokeh.io import show, output_notebook
from bokeh.models import ColumnDataSource, Legend, LabelSet, Range1d, LinearAxis, Label
from bokeh.plotting import figure
from bokeh.models.glyphs import Rect
from bokeh.transform import dodge
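# bokeh's Colorblind palettes are only defined for 3-8 categories; fall back to the
# 3-color palette when len(repo_set) falls outside that range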
try:
colors = Colorblind[len(repo_set)]
except:
colors = Colorblind[3]
#mpl['Plasma'][len(repo_set)]
#['A6CEE3','B2DF8A','33A02C','FB9A99']
def remove_outliers(input_df, field, num_outliers_repo_map):
df_no_outliers = input_df.copy()
for repo_name, num_outliers in num_outliers_repo_map.items():
indices_to_drop = input_df.loc[input_df['repo_name'] == repo_name].nlargest(num_outliers, field).index
df_no_outliers = df_no_outliers.drop(index=indices_to_drop)
return df_no_outliers
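# For example (hypothetical field/map values): remove_outliers(pr_closed, 'days_to_close', {'augur': 3})
# would drop the three largest 'days_to_close' values for the repo named 'augur'.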
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.dates import DateFormatter
import datetime as dt
def visualize_mean_days_to_close(input_df, x_axis='closed_yearmonth', description='Closed', save_file=False, num_remove_outliers=0, drop_outliers_repo=None):
# Set the df you want to build the viz's for
driver_df = input_df.copy()
driver_df = driver_df[['repo_id', 'repo_name', 'pr_src_id', 'created_yearmonth', 'closed_yearmonth', 'days_to_close']]
if save_file:
driver_df.to_csv('output/c.westw20small {}.csv'.format(description))
driver_df_mean = driver_df.groupby(['repo_id', x_axis, 'repo_name'],as_index=False).mean()
    # Total PRs Closed
fig, ax = plt.subplots()
# the size of A4 paper
fig.set_size_inches(16, 8)
plotter = sns.lineplot(x=x_axis, y='days_to_close', style='repo_name', data=driver_df_mean, sort=True, legend='full', linewidth=2.5, hue='repo_name').set_title("Average Days to Close of {} Pull Requests, July 2017-January 2020".format(description))
if save_file:
fig.savefig('images/slow_20_mean {}.png'.format(description))
    # Copy the dataframe and drop the n largest outliers for the given repo before re-visualizing
    def drop_n_largest(input_df, n, repo_name):
        input_df_copy = input_df.copy()
        indices_to_drop = input_df.loc[input_df['repo_name'] == repo_name].nlargest(n, 'days_to_close').index
        print("Indices to drop: {}".format(indices_to_drop))
        input_df_copy = input_df_copy.drop(index=indices_to_drop)
        return input_df_copy
if num_remove_outliers > 0 and drop_outliers_repo:
driver_df_mean_no_outliers = drop_n_largest(driver_df_mean, num_remove_outliers, drop_outliers_repo)
        # Total PRs Closed without outlier
fig, ax = plt.subplots()
# the size of A4 paper
fig.set_size_inches(16, 8)
plotter = sns.lineplot(x=x_axis, y='days_to_close', style='repo_name', data=driver_df_mean_no_outliers, sort=False, legend='full', linewidth=2.5, hue='repo_name').set_title("Average Days to Close among {} Pull Requests Without Outlier, July 2017-January 2020".format(description))
plotterlabels = ax.set_xticklabels(driver_df_mean_no_outliers[x_axis], rotation=90, fontsize=8)
if save_file:
fig.savefig('images/slow_20_mean_no_outlier {}.png'.format(description))
#visualize_mean_days_to_close(pr_closed, description='All Closed', save_file=False)
from bokeh.models import ColumnDataSource, FactorRange
from bokeh.transform import factor_cmap
def vertical_grouped_bar(input_df, repo_id, group_by = 'merged_flag', x_axis='closed_year', y_axis='num_commits', description='All', title="{}Average Commit Counts Per Year for {} Pull Requests"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook() # let bokeh display plot in jupyter cell output
driver_df = input_df.copy() # deep copy input data so we do not change the external dataframe
# Filter df by passed *repo_id* param
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
        # Change closed year to int so it doesn't display as e.g. 2019.0
driver_df['closed_year'] = driver_df['closed_year'].astype(int).astype(str)
# contains the closed years
x_groups = sorted(list(driver_df[x_axis].unique()))
        # inner groups on the x_axis: merged and not merged
groups = list(driver_df[group_by].unique())
        # set up the color palette
try:
colors = mpl['Plasma'][len(groups)]
except:
colors = [mpl['Plasma'][3][0]] + [mpl['Plasma'][3][1]]
merged_avg_values = list(driver_df.loc[driver_df[group_by] == 'Merged / Accepted'].groupby([x_axis],as_index=False).mean().round(1)['commit_count'])
not_merged_avg_values = list(driver_df.loc[driver_df[group_by] == 'Not Merged / Rejected'].groupby([x_axis],as_index=False).mean().round(1)['commit_count'])
# Setup data in format for grouped bar chart
data = {
'years' : x_groups,
'Merged / Accepted' : merged_avg_values,
'Not Merged / Rejected' : not_merged_avg_values,
}
x = [ (year, pr_state) for year in x_groups for pr_state in groups ]
counts = sum(zip(data['Merged / Accepted'], data['Not Merged / Rejected']), ())
source = ColumnDataSource(data=dict(x=x, counts=counts))
title_beginning = '{}: '.format(repo_dict[repo_id])
title=title.format(title_beginning, description)
plot_width = len(x_groups) * 300
title_text_font_size = 16
if (len(title) * title_text_font_size / 2) > plot_width:
plot_width = int(len(title) * title_text_font_size / 2) + 40
p = figure(x_range=FactorRange(*x), plot_height=450, plot_width=plot_width, title=title, y_range=(0, max(merged_avg_values + not_merged_avg_values)*1.15), toolbar_location=None)
# Vertical bar glyph
p.vbar(x='x', top='counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=colors, factors=groups, start=1, end=2))
# Data label
labels = LabelSet(x='x', y='counts', text='counts',# y_offset=-8, x_offset=34,
text_font_size="12pt", text_color="black",
source=source, text_align='center')
p.add_layout(labels)
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
p.yaxis.axis_label = 'Average Commits / Pull Request'
p.xaxis.axis_label = 'Year Closed'
p.title.align = "center"
p.title.text_font_size = "{}px".format(title_text_font_size)
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "15px"
p.yaxis.axis_label_text_font_size = "15px"
p.yaxis.major_label_text_font_size = "15px"
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the average commits per pull requests over an entire year, for merged and not merged pull requests."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
#show(p)
if save_files:
export_png(grid, filename="./images/v_grouped_bar/v_grouped_bar__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
#vertical_grouped_bar(pr_all, repo_id=repo_list)
def vertical_grouped_bar_line_counts(input_df, repo_id, x_axis='closed_year', y_max1=600000, y_max2=1000, description="", title ="", save_file=False):
output_notebook() # let bokeh display plot in jupyter cell output
driver_df = input_df.copy() # deep copy input data so we do not change the external dataframe
# Filter df by passed *repo_id* param
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
    # Change closed year to int so it doesn't display as e.g. 2019.0
driver_df['closed_year'] = driver_df['closed_year'].astype(int).astype(str)
# contains the closed years
x_groups = sorted(list(driver_df[x_axis].unique()))
groups = ['Lines Added', 'Lines Removed', 'Files Changed']
    # set up the color palette
colors = mpl['Plasma'][3]
display(pr_all[pr_all['lines_added'].notna()])#.groupby([x_axis],as_index=False).mean())
files_avg_values = list(driver_df.groupby([x_axis],as_index=False).mean().round(1)['file_count'])
added_avg_values = list(driver_df.groupby([x_axis],as_index=False).mean().round(1)['lines_added'])
removed_avg_values = list(driver_df.groupby([x_axis],as_index=False).mean().round(1)['lines_removed'])
display(driver_df.groupby([x_axis],as_index=False).mean())
print(files_avg_values)
print(added_avg_values)
print(removed_avg_values)
# Setup data in format for grouped bar chart
data = {
'years' : x_groups,
'Lines Added' : added_avg_values,
'Lines Removed' : removed_avg_values,
'Files Changed' : files_avg_values
}
x = [ (year, pr_state) for year in x_groups for pr_state in groups ]
line_counts = sum(zip(data['Lines Added'], data['Lines Removed'], [0]*len(x_groups)), ())
file_counts = sum(zip([0]*len(x_groups),[0]*len(x_groups),data['Files Changed']), ())
print(line_counts)
print(file_counts)
source = ColumnDataSource(data=dict(x=x, line_counts=line_counts, file_counts=file_counts))
if y_max1:
p = figure(x_range=FactorRange(*x), plot_height=450, plot_width=700, title=title.format(description), y_range=(0,y_max1))
else:
p = figure(x_range=FactorRange(*x), plot_height=450, plot_width=700, title=title.format(description))
# Setting the second y axis range name and range
p.extra_y_ranges = {"file_counts": Range1d(start=0, end=y_max2)}
# Adding the second axis to the plot.
p.add_layout(LinearAxis(y_range_name="file_counts"), 'right')
# Data label for line counts
labels = LabelSet(x='x', y='line_counts', text='line_counts',y_offset=8,# x_offset=34,
text_font_size="10pt", text_color="black",
source=source, text_align='center')
p.add_layout(labels)
# Vertical bar glyph for line counts
p.vbar(x='x', top='line_counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=colors, factors=groups, start=1, end=2))
# Data label for file counts
labels = LabelSet(x='x', y='file_counts', text='file_counts', y_offset=0, #x_offset=34,
text_font_size="10pt", text_color="black",
source=source, text_align='center', y_range_name="file_counts")
p.add_layout(labels)
# Vertical bar glyph for file counts
p.vbar(x='x', top='file_counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=colors, factors=groups, start=1, end=2), y_range_name="file_counts")
p.left[0].formatter.use_scientific = False
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
p.yaxis.axis_label = 'Average Commits / Pull Request'
p.xaxis.axis_label = 'Year Closed'
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
show(p)
if save_files:
export_png(p, filename="./images/v_grouped_bar/v_grouped_bar__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
""" THIS VIZ IS NOT READY YET , BUT UNCOMMENT LINE BELOW IF YOU WANT TO SEE"""
# vertical_grouped_bar_line_counts(pr_all, description='All', title="Average Size Metrics Per Year for {} Merged Pull Requests in Master", save_file=False, y_max1=580000, y_max2=1100)
None
def horizontal_stacked_bar(input_df, repo_id, group_by='merged_flag', x_axis='comment_count', description="All Closed", y_axis='closed_year', title="Mean Comments for {} Pull Requests"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
driver_df = input_df.copy()
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
output_notebook()
try:
y_groups = sorted(list(driver_df[y_axis].unique()))
except:
y_groups = [repo_id]
groups = driver_df[group_by].unique()
try:
colors = mpl['Plasma'][len(groups)]
except:
colors = [mpl['Plasma'][3][0]] + [mpl['Plasma'][3][1]]
len_not_merged = len(driver_df.loc[driver_df['merged_flag'] == 'Not Merged / Rejected'])
len_merged = len(driver_df.loc[driver_df['merged_flag'] == 'Merged / Accepted'])
title_beginning = '{}: '.format(repo_dict[repo_id])
plot_width = 650
p = figure(y_range=y_groups, plot_height=450, plot_width=plot_width, # y_range=y_groups,#(pr_all[y_axis].min(),pr_all[y_axis].max()) #y_axis_type="datetime",
title='{} {}'.format(title_beginning, title.format(description)), toolbar_location=None)
possible_maximums= []
for y_value in y_groups:
y_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Merged / Accepted')]
y_not_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Not Merged / Rejected')]
if len(y_merged_data) > 0:
y_merged_data[x_axis + '_mean'] = y_merged_data[x_axis].mean().round(1)
else:
y_merged_data[x_axis + '_mean'] = 0.00
if len(y_not_merged_data) > 0:
y_not_merged_data[x_axis + '_mean'] = y_not_merged_data[x_axis].mean().round(1)
else:
y_not_merged_data[x_axis + '_mean'] = 0
not_merged_source = ColumnDataSource(y_not_merged_data)
merged_source = ColumnDataSource(y_merged_data)
possible_maximums.append(max(y_not_merged_data[x_axis + '_mean']))
possible_maximums.append(max(y_merged_data[x_axis + '_mean']))
# mean comment count for merged
merged_comment_count_glyph = p.hbar(y=dodge(y_axis, -0.1, range=p.y_range), left=0, right=x_axis + '_mean', height=0.04*len(driver_df[y_axis].unique()),
source=merged_source, fill_color="black")#,legend_label="Mean Days to Close",
# Data label
labels = LabelSet(x=x_axis + '_mean', y=dodge(y_axis, -0.1, range=p.y_range), text=x_axis + '_mean', y_offset=-8, x_offset=34,
text_font_size="12pt", text_color="black",
source=merged_source, text_align='center')
p.add_layout(labels)
# mean comment count For nonmerged
not_merged_comment_count_glyph = p.hbar(y=dodge(y_axis, 0.1, range=p.y_range), left=0, right=x_axis + '_mean',
height=0.04*len(driver_df[y_axis].unique()), source=not_merged_source, fill_color="#e84d60")#legend_label="Mean Days to Close",
# Data label
labels = LabelSet(x=x_axis + '_mean', y=dodge(y_axis, 0.1, range=p.y_range), text=x_axis + '_mean', y_offset=-8, x_offset=34,
text_font_size="12pt", text_color="#e84d60",
source=not_merged_source, text_align='center')
p.add_layout(labels)
# p.y_range.range_padding = 0.1
p.ygrid.grid_line_color = None
p.legend.location = "bottom_right"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.xaxis.axis_label = 'Average Comments / Pull Request'
p.yaxis.axis_label = 'Repository' if y_axis == 'repo_name' else 'Year Closed' if y_axis == 'closed_year' else ''
legend = Legend(
items=[
("Merged Pull Request Mean Comment Count", [merged_comment_count_glyph]),
("Rejected Pull Request Mean Comment Count", [not_merged_comment_count_glyph])
],
location='center',
orientation='vertical',
border_line_color="black"
)
p.add_layout(legend, "below")
p.title.text_font_size = "16px"
p.title.align = "center"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.x_range = Range1d(0, max(possible_maximums)*1.15)
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the average number of comments per merged or not merged pull request."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
#show(p, plot_width=1200, plot_height=300*len(y_groups) + 300)
if save_files:
            repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/h_stacked_bar_mean_comments_merged_status/mean_comments_merged_status__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
#horizontal_stacked_bar(pr_closed, repo_id=repo_list)
def merged_ratio_vertical_grouped_bar(data_dict, repo_id, x_axis='closed_year', description="All Closed", title="Count of {} Pull Requests by Merged Status"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook()
colors = mpl['Plasma'][6]
#if repo_name == 'mbed-os':
#colors = colors[::-1]
for data_desc, input_df in data_dict.items():
x_groups = sorted(list(input_df[x_axis].astype(str).unique()))
break
plot_width = 315 * len(x_groups)
title_beginning = repo_dict[repo_id]
p = figure(x_range=x_groups, plot_height=350, plot_width=plot_width,
title='{}: {}'.format(title_beginning, title.format(description)), toolbar_location=None)
dodge_amount = 0.12
color_index = 0
x_offset = 50
all_totals = []
for data_desc, input_df in data_dict.items():
driver_df = input_df.copy()
driver_df[x_axis] = driver_df[x_axis].astype(str)
groups = sorted(list(driver_df['merged_flag'].unique()))
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
len_merged = []
zeros = []
len_not_merged = []
totals = []
for x_group in x_groups:
len_merged_entry = len(driver_df.loc[(driver_df['merged_flag'] == 'Merged / Accepted') & (driver_df[x_axis] == x_group)])
totals += [len(driver_df.loc[(driver_df['merged_flag'] == 'Not Merged / Rejected') & (driver_df[x_axis] == x_group)]) + len_merged_entry]
len_not_merged += [len(driver_df.loc[(driver_df['merged_flag'] == 'Not Merged / Rejected') & (driver_df[x_axis] == x_group)])]
len_merged += [len_merged_entry]
zeros.append(0)
data = {'X': x_groups}
for group in groups:
data[group] = []
for x_group in x_groups:
data[group] += [len(driver_df.loc[(driver_df['merged_flag'] == group) & (driver_df[x_axis] == x_group)])]
data['len_merged'] = len_merged
data['len_not_merged'] = len_not_merged
data['totals'] = totals
data['zeros'] = zeros
if data_desc == "All":
all_totals = totals
source = ColumnDataSource(data)
stacked_bar = p.vbar_stack(groups, x=dodge('X', dodge_amount, range=p.x_range), width=0.2, source=source, color=colors[1:3], legend_label=[f"{data_desc} " + "%s" % x for x in groups])
# Data label for merged
p.add_layout(
LabelSet(x=dodge('X', dodge_amount, range=p.x_range), y='zeros', text='len_merged', y_offset=2, x_offset=x_offset,
text_font_size="12pt", text_color=colors[1:3][0],
source=source, text_align='center')
)
if min(data['totals']) < 400:
y_offset = 15
else:
y_offset = 0
# Data label for not merged
p.add_layout(
LabelSet(x=dodge('X', dodge_amount, range=p.x_range), y='totals', text='len_not_merged', y_offset=y_offset, x_offset=x_offset,
text_font_size="12pt", text_color=colors[1:3][1],
source=source, text_align='center')
)
# Data label for total
p.add_layout(
LabelSet(x=dodge('X', dodge_amount, range=p.x_range), y='totals', text='totals', y_offset=0, x_offset=0,
text_font_size="12pt", text_color='black',
source=source, text_align='center')
)
dodge_amount *= -1
colors = colors[::-1]
x_offset *= -1
p.y_range = Range1d(0, max(all_totals)*1.4)
p.xgrid.grid_line_color = None
p.legend.location = "top_center"
p.legend.orientation="horizontal"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.yaxis.axis_label = 'Count of Pull Requests'
p.xaxis.axis_label = 'Repository' if x_axis == 'repo_name' else 'Year Closed' if x_axis == 'closed_year' else ''
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.outline_line_color = None
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the number of closed pull requests per year in four different categories. These four categories are All Merged, All Not Merged, Slowest 20% Merged, and Slowest 20% Not Merged."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/v_stacked_bar_merged_status_count/stacked_bar_merged_status_count__{}_PRs__xaxis_{}__repo_{}.png".format(description, x_axis, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#merged_ratio_vertical_grouped_bar({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
def visualize_mean_response_times(input_df, repo_id, time_unit='days', x_max=95, y_axis='closed_year', description="All Closed", legend_position=(410, 10)):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook() # let bokeh show plot in jupyter cell output
driver_df = input_df.copy()[['repo_name', 'repo_id', 'merged_flag', y_axis, time_unit + '_to_first_response', time_unit + '_to_last_response',
time_unit + '_to_close']] # deep copy input data so we do not alter the external dataframe
# filter by repo_id param
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
title_beginning = '{}: '.format(repo_dict[repo_id])
plot_width = 950
p = figure(toolbar_location=None, y_range=sorted(driver_df[y_axis].unique()), plot_width=plot_width,
plot_height=450,#75*len(driver_df[y_axis].unique()),
title="{}Mean Response Times for Pull Requests {}".format(title_beginning, description))
first_response_glyphs = []
last_response_glyphs = []
merged_days_to_close_glyphs = []
not_merged_days_to_close_glyphs = []
possible_maximums = []
for y_value in driver_df[y_axis].unique():
y_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Merged / Accepted')]
y_not_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Not Merged / Rejected')]
y_merged_data[time_unit + '_to_first_response_mean'] = y_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_last_response_mean'] = y_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_close_mean'] = y_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_first_response_mean'] = y_not_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_last_response_mean'] = y_not_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_close_mean'] = y_not_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
possible_maximums.append(max(y_merged_data[time_unit + '_to_close_mean']))
possible_maximums.append(max(y_not_merged_data[time_unit + '_to_close_mean']))
maximum = max(possible_maximums)*1.15
ideal_difference = maximum*0.064
for y_value in driver_df[y_axis].unique():
y_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Merged / Accepted')]
y_not_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Not Merged / Rejected')]
y_merged_data[time_unit + '_to_first_response_mean'] = y_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_last_response_mean'] = y_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_close_mean'] = y_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_first_response_mean'] = y_not_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_last_response_mean'] = y_not_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_close_mean'] = y_not_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
not_merged_source = ColumnDataSource(y_not_merged_data)
merged_source = ColumnDataSource(y_merged_data)
# mean PR length for merged
merged_days_to_close_glyph = p.hbar(y=dodge(y_axis, -0.1, range=p.y_range), left=0, right=time_unit + '_to_close_mean', height=0.04*len(driver_df[y_axis].unique()),
source=merged_source, fill_color="black")#,legend_label="Mean Days to Close",
merged_days_to_close_glyphs.append(merged_days_to_close_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_close_mean', y=dodge(y_axis, -0.1, range=p.y_range), text=time_unit + '_to_close_mean', y_offset=-8, x_offset=34, #34
text_font_size="12pt", text_color="black",
source=merged_source, text_align='center')
p.add_layout(labels)
# mean PR length For nonmerged
not_merged_days_to_close_glyph = p.hbar(y=dodge(y_axis, 0.1, range=p.y_range), left=0, right=time_unit + '_to_close_mean',
height=0.04*len(driver_df[y_axis].unique()), source=not_merged_source, fill_color="#e84d60")#legend_label="Mean Days to Close",
not_merged_days_to_close_glyphs.append(not_merged_days_to_close_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_close_mean', y=dodge(y_axis, 0.1, range=p.y_range), text=time_unit + '_to_close_mean', y_offset=-8, x_offset=44,
text_font_size="12pt", text_color="#e84d60",
source=not_merged_source, text_align='center')
p.add_layout(labels)
#if the difference between two values is less than 6.4 percent move the second one to the right 30 pixels
if (max(y_merged_data[time_unit + '_to_last_response_mean']) - max(y_merged_data[time_unit + '_to_first_response_mean'])) < ideal_difference:
merged_x_offset = 30
else:
merged_x_offset = 0
#if the difference between two values is less than 6.4 percent move the second one to the right 30 pixels
if (max(y_not_merged_data[time_unit + '_to_last_response_mean']) - max(y_not_merged_data[time_unit + '_to_first_response_mean'])) < ideal_difference:
not_merged_x_offset = 30
else:
not_merged_x_offset = 0
#if there is only one bar set the y_offsets so the labels will not overlap the bars
if len(driver_df[y_axis].unique()) == 1:
merged_y_offset = -65
not_merged_y_offset = 45
else:
merged_y_offset = -45
not_merged_y_offset = 25
# mean time to first response
glyph = Rect(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, -0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[0])
first_response_glyph = p.add_glyph(merged_source, glyph)
first_response_glyphs.append(first_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, 0, range=p.y_range),text=time_unit + '_to_first_response_mean',x_offset = 0, y_offset=merged_y_offset,#-60,
text_font_size="12pt", text_color=colors[0],
source=merged_source, text_align='center')
p.add_layout(labels)
#for nonmerged
glyph = Rect(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, 0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[0])
first_response_glyph = p.add_glyph(not_merged_source, glyph)
first_response_glyphs.append(first_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, 0, range=p.y_range),text=time_unit + '_to_first_response_mean',x_offset = 0, y_offset=not_merged_y_offset,#40,
text_font_size="12pt", text_color=colors[0],
source=not_merged_source, text_align='center')
p.add_layout(labels)
# mean time to last response
glyph = Rect(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, -0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[1])
last_response_glyph = p.add_glyph(merged_source, glyph)
last_response_glyphs.append(last_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, 0, range=p.y_range), text=time_unit + '_to_last_response_mean', x_offset=merged_x_offset, y_offset=merged_y_offset,#-60,
text_font_size="12pt", text_color=colors[1],
source=merged_source, text_align='center')
p.add_layout(labels)
#for nonmerged
glyph = Rect(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, 0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[1])
last_response_glyph = p.add_glyph(not_merged_source, glyph)
last_response_glyphs.append(last_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, 0, range=p.y_range), text=time_unit + '_to_last_response_mean', x_offset = not_merged_x_offset, y_offset=not_merged_y_offset,#40,
text_font_size="12pt", text_color=colors[1],
source=not_merged_source, text_align='center')
p.add_layout(labels)
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label = "Days to Close"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
        #adjust the starting and ending points based on the maximum value in the graph
p.x_range = Range1d(maximum/30 * -1, maximum*1.15)
p.yaxis.axis_label = "Repository" if y_axis == 'repo_name' else 'Year Closed' if y_axis == 'closed_year' else ''
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.ygrid.grid_line_color = None
p.y_range.range_padding = 0.15
p.outline_line_color = None
p.toolbar.logo = None
p.toolbar_location = None
def add_legend(location, orientation, side):
legend = Legend(
items=[
("Mean Days to First Response", first_response_glyphs),
("Mean Days to Last Response", last_response_glyphs),
("Merged Mean Days to Close", merged_days_to_close_glyphs),
("Not Merged Mean Days to Close", not_merged_days_to_close_glyphs)
],
location=location,
orientation=orientation,
border_line_color="black"
# title='Example Title'
)
p.add_layout(legend, side)
# add_legend((150, 50), "horizontal", "center")
add_legend(legend_position, "vertical", "right")
plot = p
p = figure(width = plot_width, height = 200, margin = (0, 0, 0, 0))
caption = "Caption Here"
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
export_png(grid, filename="./images/hbar_response_times/mean_response_times__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
# for repo_name in pr_closed['repo_name'].unique():
#visualize_mean_response_times(pr_closed, repo_id=repo_list, legend_position='center')
def visualize_mean_time_between_responses(data_dict, repo_id, time_unit='Days', x_axis='closed_yearmonth', description="All Closed", line_group='merged_flag', y_axis='average_days_between_responses', num_outliers_repo_map={}):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook()
plot_width = 950
p1 = figure(x_axis_type="datetime", title="{}: Mean {} Between Comments by Month Closed for {} Pull Requests".format(repo_dict[repo_id], time_unit, description), plot_width=plot_width, x_range=(pr_all[x_axis].min(),pr_all[x_axis].max()), plot_height=500, toolbar_location=None)
colors = Category20[10][6:]
color_index = 0
glyphs = []
possible_maximums = []
for data_desc, input_df in data_dict.items():
driver_df = input_df.copy()
driver_df = remove_outliers(driver_df, y_axis, num_outliers_repo_map)
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
index = 0
driver_df_mean = driver_df.groupby(['repo_id', line_group, x_axis],as_index=False).mean()
title_ending = ''
if repo_id:
title_ending += ' for Repo: {}'.format(repo_id)
for group_num, line_group_value in enumerate(driver_df[line_group].unique(), color_index):
glyphs.append(p1.line(driver_df_mean.loc[driver_df_mean[line_group] == line_group_value][x_axis], driver_df_mean.loc[driver_df_mean[line_group] == line_group_value][y_axis], color=colors[group_num], line_width = 3))
color_index += 1
possible_maximums.append(max(driver_df_mean.loc[driver_df_mean[line_group] == line_group_value][y_axis].dropna()))
for repo, num_outliers in num_outliers_repo_map.items():
            if repo_dict[repo_id] == repo:
p1.add_layout(Title(text="** {} outliers for {} were removed".format(num_outliers, repo), align="center"), "below")
p1.grid.grid_line_alpha = 0.3
p1.xaxis.axis_label = 'Month Closed'
p1.xaxis.ticker.desired_num_ticks = 15
p1.yaxis.axis_label = 'Mean {} Between Responses'.format(time_unit)
p1.legend.location = "top_left"
legend = Legend(
items=[
("All Not Merged / Rejected", [glyphs[0]]),
("All Merged / Accepted", [glyphs[1]]),
("Slowest 20% Not Merged / Rejected", [glyphs[2]]),
("Slowest 20% Merged / Accepted", [glyphs[3]])
],
location='center_right',
orientation='vertical',
border_line_color="black"
)
p1.add_layout(legend, 'right')
p1.title.text_font_size = "16px"
p1.xaxis.axis_label_text_font_size = "16px"
p1.xaxis.major_label_text_font_size = "16px"
p1.yaxis.axis_label_text_font_size = "16px"
p1.yaxis.major_label_text_font_size = "16px"
p1.xaxis.major_label_orientation = 45.0
p1.y_range = Range1d(0, max(possible_maximums)*1.15)
plot = p1
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the average number of days between comments for all closed pull requests per month in four categories. These four categories are All Merged, All Not Merged, Slowest 20% Merged, and Slowest 20% Not Merged."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
            repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/line_mean_time_between_comments/line_mean_time_between_comments__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#visualize_mean_time_between_responses({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
def visualize_time_to_first_comment(input_df, repo_id, x_axis='pr_closed_at', y_axis='days_to_first_response', description='All', num_outliers_repo_map={}, group_by='merged_flag', same_scales=True, columns=2, legend_position='top_right', remove_outliers = 0):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook()
driver_df = input_df.copy()
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
group_by_groups = sorted(driver_df[group_by].unique())
seconds = ((driver_df[x_axis].max() + datetime.timedelta(days=25))- (driver_df[x_axis].min() - datetime.timedelta(days=30))).total_seconds()
quarter_years = seconds / 10506240
quarter_years = round(quarter_years)
title_beginning = '{}: '.format(repo_dict[repo_id])
plot_width = 180 * 5
p = figure(x_range=(driver_df[x_axis].min() - datetime.timedelta(days=30), driver_df[x_axis].max() + datetime.timedelta(days=25)),
#(driver_df[y_axis].min(), driver_df[y_axis].max()),
toolbar_location=None,
title='{}Days to First Response for {} Closed Pull Requests'.format(title_beginning, description), plot_width=plot_width,
plot_height=400, x_axis_type='datetime')
for index, group_by_group in enumerate(group_by_groups):
p.scatter(x_axis, y_axis, color=colors[index], marker="square", source=driver_df.loc[driver_df[group_by] == group_by_group], legend_label=group_by_group)
if group_by_group == "Merged / Accepted":
merged_values = driver_df.loc[driver_df[group_by] == group_by_group][y_axis].dropna().values.tolist()
else:
not_merged_values = driver_df.loc[driver_df[group_by] == group_by_group][y_axis].dropna().values.tolist()
values = not_merged_values + merged_values
#values.fillna(0)
for value in range(0, remove_outliers):
values.remove(max(values))
        #determine y_max by finding the max of the values and scaling it up a small amount
y_max = max(values)*1.0111
outliers = driver_df.loc[driver_df[y_axis] > y_max]
if len(outliers) > 0:
if repo_id:
p.add_layout(Title(text="** Outliers cut off at {} days: {} outlier(s) for {} were removed **".format(y_max, len(outliers), repo_name), align="center"), "below")
else:
p.add_layout(Title(text="** Outliers cut off at {} days: {} outlier(s) were removed **".format(y_max, len(outliers)), align="center"), "below")
p.xaxis.axis_label = 'Date Closed' if x_axis == 'pr_closed_at' else 'Date Created' if x_axis == 'pr_created_at' else 'Date'
p.yaxis.axis_label = 'Days to First Response'
p.legend.location = legend_position
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.y_range = Range1d(0, y_max)
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the days to first reponse for individual pull requests, either Merged or Not Merged."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = repo_ids
export_png(grid, filename="./images/first_comment_times/scatter_first_comment_times__{}_PRs__xaxis_{}__repo_{}.png".format(description, x_axis, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#visualize_time_to_first_comment(pr_closed, repo_id= repo_list, legend_position='top_right', remove_outliers = scatter_plot_outliers_removed)
def hex_to_RGB(hex):
''' "#FFFFFF" -> [255,255,255] '''
# Pass 16 to the integer function for change of base
return [int(hex[i:i+2], 16) for i in range(1,6,2)]
def color_dict(gradient):
''' Takes in a list of RGB sub-lists and returns dictionary of
colors in RGB and hex form for use in a graphing function
defined later on '''
return {"hex":[RGB_to_hex(RGB) for RGB in gradient],
"r":[RGB[0] for RGB in gradient],
"g":[RGB[1] for RGB in gradient],
"b":[RGB[2] for RGB in gradient]}
def RGB_to_hex(RGB):
''' [255,255,255] -> "#FFFFFF" '''
# Components need to be integers for hex to make sense
RGB = [int(x) for x in RGB]
return "#"+"".join(["0{0:x}".format(v) if v < 16 else
"{0:x}".format(v) for v in RGB])
def linear_gradient(start_hex, finish_hex="#FFFFFF", n=10):
''' returns a gradient list of (n) colors between
two hex colors. start_hex and finish_hex
should be the full six-digit color string,
        including the number sign ("#FFFFFF") '''
# Starting and ending colors in RGB form
s = hex_to_RGB(start_hex)
f = hex_to_RGB(finish_hex)
    # Initialize a list of the output colors with the starting color
RGB_list = [s]
    # Calculate a color at each evenly spaced value of t from 1 to n
for t in range(1, n):
# Interpolate RGB vector for color at the current value of t
curr_vector = [
int(s[j] + (float(t)/(n-1))*(f[j]-s[j]))
for j in range(3)
]
# Add it to our list of output colors
RGB_list.append(curr_vector)
return color_dict(RGB_list)
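# For illustration: linear_gradient('#0080FF', '#DC143C', n=3)['hex'] == ['#0080ff', '#6e4a9d', '#dc143c']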
from bokeh.models import BasicTicker, ColorBar, LinearColorMapper, PrintfTickFormatter, LogTicker, Label
from bokeh.transform import transform
def events_types_heat_map(input_df, repo_id, include_comments=True, x_axis='closed_year', facet="merged_flag",columns=2, x_max=1100, same_scales=True, y_axis='repo_name', description="All Closed", title="Average Pull Request Event Types for {} Pull Requests"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
colors = linear_gradient('#f5f5dc', '#fff44f', 150)['hex']
driver_df = input_df.copy()
driver_df[x_axis] = driver_df[x_axis].astype(str)
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
if facet == 'closed_year' or y_axis == 'closed_year':
driver_df['closed_year'] = driver_df['closed_year'].astype(int).astype(str)
optional_comments = ['comment_count'] if include_comments else []
driver_df = driver_df[['repo_id', 'repo_name',x_axis, 'assigned_count',
'review_requested_count',
'labeled_count',
'subscribed_count',
'mentioned_count',
'referenced_count',
'closed_count',
'head_ref_force_pushed_count',
'merged_count',
'milestoned_count',
'unlabeled_count',
'head_ref_deleted_count', facet ] + optional_comments]
y_groups = [
'review_requested_count',
'labeled_count',
'subscribed_count',
'referenced_count',
'closed_count',
# 'milestoned_count',
] + optional_comments
output_notebook()
optional_group_comments = ['comment'] if include_comments else []
# y_groups = ['subscribed', 'mentioned', 'labeled', 'review_requested', 'head_ref_force_pushed', 'referenced', 'closed', 'merged', 'unlabeled', 'head_ref_deleted', 'milestoned', 'assigned'] + optional_group_comments
x_groups = sorted(list(driver_df[x_axis].unique()))
grid_array = []
grid_row = []
for index, facet_group in enumerate(sorted(driver_df[facet].unique())):
facet_data = driver_df.loc[driver_df[facet] == facet_group]
# display(facet_data.sort_values('merged_count', ascending=False).head(50))
driver_df_mean = facet_data.groupby(['repo_id', 'repo_name', x_axis], as_index=False).mean().round(1)
# data = {'Y' : y_groups}
# for group in y_groups:
# data[group] = driver_df_mean[group].tolist()
plot_width = 700
p = figure(y_range=y_groups, plot_height=500, plot_width=plot_width, x_range=x_groups,
                title='{}'.format(facet_group))
for y_group in y_groups:
driver_df_mean['field'] = y_group
source = ColumnDataSource(driver_df_mean)
mapper = LinearColorMapper(palette=colors, low=driver_df_mean[y_group].min(), high=driver_df_mean[y_group].max())
p.rect(y='field', x=x_axis, width=1, height=1, source=source,
line_color=None, fill_color=transform(y_group, mapper))
# Data label
labels = LabelSet(x=x_axis, y='field', text=y_group, y_offset=-8,
text_font_size="12pt", text_color='black',
source=source, text_align='center')
p.add_layout(labels)
color_bar = ColorBar(color_mapper=mapper, location=(0, 0),
ticker=BasicTicker(desired_num_ticks=9),
formatter=PrintfTickFormatter(format="%d"))
# p.add_layout(color_bar, 'right')
p.y_range.range_padding = 0.1
p.ygrid.grid_line_color = None
p.legend.location = "bottom_right"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.xaxis.axis_label = 'Year Closed'
p.yaxis.axis_label = 'Event Type'
p.title.align = "center"
p.title.text_font_size = "15px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
grid_row.append(p)
if index % columns == columns - 1:
grid_array.append(grid_row)
grid_row = []
grid = gridplot(grid_array)
#add title, the title changes its x value based on the number of x_groups so that it stays centered
label=Label(x=-len(x_groups), y=6.9, text='{}: Average Pull Request Event Types for {} Closed Pull Requests'.format(repo_dict[repo_id], description), render_mode='css', text_font_size = '17px', text_font_style= 'bold')
p.add_layout(label)
show(grid, plot_width=1200, plot_height=1200)
if save_files:
comments_included = 'comments_included' if include_comments else 'comments_not_included'
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/h_stacked_bar_mean_event_types/mean_event_types__facet_{}__{}_PRs__yaxis_{}__{}__repo_{}.png".format(facet, description, y_axis, comments_included, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#events_types_heat_map(pr_closed, repo_id=repo_list)
red_green_gradient = linear_gradient('#0080FF', '#DC143C', 150)['hex']
#32CD32
def heat_map(input_df, repo_id, x_axis='repo_name', group_by='merged_flag', y_axis='closed_yearmonth', same_scales=True, description="All Closed", heat_field='pr_duration_days', columns=2, remove_outliers = 0):
output_notebook()
driver_df = input_df.copy()[['repo_id', y_axis, group_by, x_axis, heat_field]]
print(driver_df)
if display_grouping == 'repo':
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
driver_df[y_axis] = driver_df[y_axis].astype(str)
# add new group by + xaxis column
driver_df['grouped_x'] = driver_df[x_axis] + ' - ' + driver_df[group_by]
driver_df_mean = driver_df.groupby(['grouped_x', y_axis], as_index=False).mean()
colors = red_green_gradient
y_groups = driver_df_mean[y_axis].unique()
x_groups = sorted(driver_df[x_axis].unique())
grouped_x_groups = sorted(driver_df_mean['grouped_x'].unique())
values = driver_df_mean['pr_duration_days'].values.tolist()
for i in range(0, remove_outliers):
values.remove(max(values))
heat_max = max(values)* 1.02
mapper = LinearColorMapper(palette=colors, low=driver_df_mean[heat_field].min(), high=heat_max)#driver_df_mean[heat_field].max())
source = ColumnDataSource(driver_df_mean)
title_beginning = repo_dict[repo_id] + ':' if not type(repo_id) == type(repo_list) else ''
plot_width = 1100
p = figure(plot_width=plot_width, plot_height=300, title="{} Mean Duration (Days) {} Pull Requests".format(title_beginning,description),
y_range=grouped_x_groups[::-1], x_range=y_groups,
toolbar_location=None, tools="")#, x_axis_location="above")
for x_group in x_groups:
outliers = driver_df_mean.loc[(driver_df_mean[heat_field] > heat_max) & (driver_df_mean['grouped_x'].str.contains(x_group))]
if len(outliers) > 0:
p.add_layout(Title(text="** Outliers capped at {} days: {} outlier(s) for {} were capped at {} **".format(heat_max, len(outliers), x_group, heat_max), align="center"), "below")
p.rect(x=y_axis, y='grouped_x', width=1, height=1, source=source,
line_color=None, fill_color=transform(heat_field, mapper))
color_bar = ColorBar(color_mapper=mapper, location=(0, 0),
ticker=BasicTicker(desired_num_ticks=9),
formatter=PrintfTickFormatter(format="%d"))
p.add_layout(color_bar, 'right')
p.title.align = "center"
p.title.text_font_size = "16px"
p.axis.axis_line_color = None
p.axis.major_tick_line_color = None
p.axis.major_label_text_font_size = "11pt"
p.axis.major_label_standoff = 0
p.xaxis.major_label_orientation = 1.0
p.xaxis.axis_label = 'Month Closed' if y_axis[0:6] == 'closed' else 'Date Created' if y_axis[0:7] == 'created' else 'Repository' if y_axis == 'repo_name' else ''
# p.yaxis.axis_label = 'Merged Status'
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "14px"
p.yaxis.major_label_text_font_size = "15px"
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "Caption Here"
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/heat_map_pr_duration_merged_status/heat_map_duration_by_merged_status__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
#heat_map(pr_closed, repo_id=25502)
if display_grouping == 'repo':
for repo_id in repo_set:
vertical_grouped_bar(pr_all, repo_id=repo_id)
horizontal_stacked_bar(pr_closed, repo_id=repo_id)
merged_ratio_vertical_grouped_bar({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_id)
visualize_mean_response_times(pr_closed, repo_id=repo_id, legend_position='center')
visualize_mean_time_between_responses({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_id)
visualize_time_to_first_comment(pr_closed, repo_id= repo_id, legend_position='top_right', remove_outliers = scatter_plot_outliers_removed)
events_types_heat_map(pr_closed, repo_id=repo_id)
#print(pr_closed)
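# Convert each PR's closed - created timedelta into fractional days (via minutes) for the duration heat map.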
pr_duration_frame = pr_closed.assign(pr_duration=(pr_closed['pr_closed_at'] - pr_closed['pr_created_at']))
pr_duration_frame = pr_duration_frame.assign(pr_duration_days = (pr_duration_frame['pr_duration'] / datetime.timedelta(minutes=1))/60/24)
heat_map(pr_duration_frame, repo_id=repo_id)
elif display_grouping == 'competitors':
vertical_grouped_bar(pr_all, repo_id=repo_list)
horizontal_stacked_bar(pr_closed, repo_id=repo_list)
merged_ratio_vertical_grouped_bar({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
visualize_mean_response_times(pr_closed, repo_id=repo_list, legend_position='center')
visualize_mean_time_between_responses({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
visualize_time_to_first_comment(pr_closed, repo_id= repo_list, legend_position='top_right', remove_outliers = scatter_plot_outliers_removed)
events_types_heat_map(pr_closed, repo_id=repo_list)
pr_duration_frame = pr_closed.assign(pr_duration=(pr_closed['pr_closed_at'] - pr_closed['pr_created_at']))
pr_duration_frame = pr_duration_frame.assign(pr_duration_days = (pr_duration_frame['pr_duration'] / datetime.timedelta(minutes=1))/60/24)
heat_map(pr_duration_frame, repo_id=repo_list)
###Output
_____no_output_____
###Markdown
Pull Request Analysis Visualization Limitations for Reporting on Several Repos: The visualizations in this notebook are, like most, able to coherently display information for between 1 and 8 different repositories simultaneously. Alternatives for Reporting on Repo Groups Comprising Many Repos: The included queries can be rewritten to show an entire repository group's characteristics, if that is your primary aim. Specifically, any query can replace this line: ``` WHERE repo.repo_id = {repo_id}```with this line to compare different groups of repositories: ``` WHERE repo_groups.repo_group_id = {repo_id}```Then replace the set of ids in the **Pull Request Filter** section with a list of repo_group_id numbers to complete the view. ------------
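For instance, the filter could be toggled between the two forms with a small helper like the sketch below. This is a sketch only: `filter_by_group` and `target_id` are illustrative names, not part of this notebook's Control Cell.

```python
# Illustrative sketch: pick a repo-level or group-level filter for the PR query.
filter_by_group = False
target_id = 25440  # a repo_id, or a repo_group_id when filter_by_group is True

if filter_by_group:
    id_filter = f"repo_groups.repo_group_id = {target_id}"
else:
    id_filter = f"repo.repo_id = {target_id}"

# The fragment can then be interpolated wherever the query below says
# "repo.repo_id = {repo_id}", e.g. f"... WHERE {id_filter} AND ..."
```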
###Code
import psycopg2
import pandas as pd
import sqlalchemy as salc
import matplotlib
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
import datetime
import json
warnings.filterwarnings('ignore')
with open("config.json") as config_file:
config = json.load(config_file)
database_connection_string = 'postgres+psycopg2://{}:{}@{}:{}/{}'.format(config['user'], config['password'], config['host'], config['port'], config['database'])
dbschema='augur_data'
engine = salc.create_engine(
database_connection_string,
connect_args={'options': '-csearch_path={}'.format(dbschema)})
###Output
_____no_output_____
###Markdown
Control Cell
###Code
#declare all repo ids you would like to produce charts for
repo_set = {25440, 25448}
#can be set as 'competitors' or 'repo'
#'competitors' will group graphs by type, so it is easy to compare across repos
# 'repo' will group graphs by repo so it is easy to look at all the contributor data for each repo
display_grouping = 'repo'
#if display_grouping is set to 'competitors', enter the repo ids you do not want to alias; if 'display_grouping' is set to 'repo' the list will not affect anything
not_aliased_repos = [25502, 25583]
begin_date = '2018-01-01'
end_date = '2020-07-30'
#specify number of outliers for removal in scatter plot
scatter_plot_outliers_removed = 5
save_files = False
###Output
_____no_output_____
###Markdown
Identifying the Longest Running Pull Requests: Getting the Data
###Code
pr_all = pd.DataFrame()
for repo_id in repo_set:
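# One query per repo in repo_set: the LEFT OUTER JOIN subqueries aggregate event counts, comment/response timings, commit counts, and line/file change totals back onto pull_requests.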
pr_query = salc.sql.text(f"""
SELECT
repo.repo_id AS repo_id,
pull_requests.pr_src_id AS pr_src_id,
repo.repo_name AS repo_name,
pr_src_author_association,
repo_groups.rg_name AS repo_group,
pull_requests.pr_src_state,
pull_requests.pr_merged_at,
pull_requests.pr_created_at AS pr_created_at,
pull_requests.pr_closed_at AS pr_closed_at,
date_part( 'year', pr_created_at :: DATE ) AS CREATED_YEAR,
date_part( 'month', pr_created_at :: DATE ) AS CREATED_MONTH,
date_part( 'year', pr_closed_at :: DATE ) AS CLOSED_YEAR,
date_part( 'month', pr_closed_at :: DATE ) AS CLOSED_MONTH,
pr_src_meta_label,
pr_head_or_base,
( EXTRACT ( EPOCH FROM pull_requests.pr_closed_at ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 3600 AS hours_to_close,
( EXTRACT ( EPOCH FROM pull_requests.pr_closed_at ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 86400 AS days_to_close,
( EXTRACT ( EPOCH FROM first_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 3600 AS hours_to_first_response,
( EXTRACT ( EPOCH FROM first_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 86400 AS days_to_first_response,
( EXTRACT ( EPOCH FROM last_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 3600 AS hours_to_last_response,
( EXTRACT ( EPOCH FROM last_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 86400 AS days_to_last_response,
first_response_time,
last_response_time,
average_time_between_responses,
assigned_count,
review_requested_count,
labeled_count,
subscribed_count,
mentioned_count,
referenced_count,
closed_count,
head_ref_force_pushed_count,
merged_count,
milestoned_count,
unlabeled_count,
head_ref_deleted_count,
comment_count,
lines_added,
lines_removed,
commit_count,
file_count
FROM
repo,
repo_groups,
pull_requests LEFT OUTER JOIN (
SELECT pull_requests.pull_request_id,
count(*) FILTER (WHERE action = 'assigned') AS assigned_count,
count(*) FILTER (WHERE action = 'review_requested') AS review_requested_count,
count(*) FILTER (WHERE action = 'labeled') AS labeled_count,
count(*) FILTER (WHERE action = 'unlabeled') AS unlabeled_count,
count(*) FILTER (WHERE action = 'subscribed') AS subscribed_count,
count(*) FILTER (WHERE action = 'mentioned') AS mentioned_count,
count(*) FILTER (WHERE action = 'referenced') AS referenced_count,
count(*) FILTER (WHERE action = 'closed') AS closed_count,
count(*) FILTER (WHERE action = 'head_ref_force_pushed') AS head_ref_force_pushed_count,
count(*) FILTER (WHERE action = 'head_ref_deleted') AS head_ref_deleted_count,
count(*) FILTER (WHERE action = 'milestoned') AS milestoned_count,
count(*) FILTER (WHERE action = 'merged') AS merged_count,
MIN(message.msg_timestamp) AS first_response_time,
COUNT(DISTINCT message.msg_timestamp) AS comment_count,
MAX(message.msg_timestamp) AS last_response_time,
(MAX(message.msg_timestamp) - MIN(message.msg_timestamp)) / COUNT(DISTINCT message.msg_timestamp) AS average_time_between_responses
FROM pull_request_events, pull_requests, repo, pull_request_message_ref, message
WHERE repo.repo_id = {repo_id}
AND repo.repo_id = pull_requests.repo_id
AND pull_requests.pull_request_id = pull_request_events.pull_request_id
AND pull_requests.pull_request_id = pull_request_message_ref.pull_request_id
AND pull_request_message_ref.msg_id = message.msg_id
GROUP BY pull_requests.pull_request_id
) response_times
ON pull_requests.pull_request_id = response_times.pull_request_id
LEFT OUTER JOIN (
SELECT pull_request_commits.pull_request_id, count(DISTINCT pr_cmt_sha) AS commit_count FROM pull_request_commits, pull_requests, pull_request_meta
WHERE pull_requests.pull_request_id = pull_request_commits.pull_request_id
AND pull_requests.pull_request_id = pull_request_meta.pull_request_id
AND pull_requests.repo_id = {repo_id}
AND pr_cmt_sha <> pull_requests.pr_merge_commit_sha
AND pr_cmt_sha <> pull_request_meta.pr_sha
GROUP BY pull_request_commits.pull_request_id
) all_commit_counts
ON pull_requests.pull_request_id = all_commit_counts.pull_request_id
LEFT OUTER JOIN (
SELECT MAX(pr_repo_meta_id), pull_request_meta.pull_request_id, pr_head_or_base, pr_src_meta_label
FROM pull_requests, pull_request_meta
WHERE pull_requests.pull_request_id = pull_request_meta.pull_request_id
AND pull_requests.repo_id = {repo_id}
AND pr_head_or_base = 'base'
GROUP BY pull_request_meta.pull_request_id, pr_head_or_base, pr_src_meta_label
) base_labels
ON base_labels.pull_request_id = all_commit_counts.pull_request_id
LEFT OUTER JOIN (
SELECT sum(cmt_added) AS lines_added, sum(cmt_removed) AS lines_removed, pull_request_commits.pull_request_id, count(DISTINCT cmt_filename) AS file_count
FROM pull_request_commits, commits, pull_requests, pull_request_meta
WHERE cmt_commit_hash = pr_cmt_sha
AND pull_requests.pull_request_id = pull_request_commits.pull_request_id
AND pull_requests.pull_request_id = pull_request_meta.pull_request_id
AND pull_requests.repo_id = {repo_id}
AND commits.repo_id = pull_requests.repo_id
AND commits.cmt_commit_hash <> pull_requests.pr_merge_commit_sha
AND commits.cmt_commit_hash <> pull_request_meta.pr_sha
GROUP BY pull_request_commits.pull_request_id
) master_merged_counts
ON base_labels.pull_request_id = master_merged_counts.pull_request_id
WHERE
repo.repo_group_id = repo_groups.repo_group_id
AND repo.repo_id = pull_requests.repo_id
AND repo.repo_id = {repo_id}
ORDER BY
merged_count DESC
""")
pr_a = pd.read_sql(pr_query, con=engine)
if not pr_all.empty:
pr_all = pd.concat([pr_all, pr_a])
else:
# first repo
pr_all = pr_a
display(pr_all.head())
pr_all.dtypes
###Output
_____no_output_____
###Markdown
Begin data pre-processing and adding columns: Data type changing
###Code
# cast the event count columns to a numeric (float) dtype so missing values stay as NaN
pr_all[['assigned_count',
'review_requested_count',
'labeled_count',
'subscribed_count',
'mentioned_count',
'referenced_count',
'closed_count',
'head_ref_force_pushed_count',
'merged_count',
'milestoned_count',
'unlabeled_count',
'head_ref_deleted_count',
'comment_count',
'commit_count',
'file_count',
'lines_added',
'lines_removed'
]] = pr_all[['assigned_count',
'review_requested_count',
'labeled_count',
'subscribed_count',
'mentioned_count',
'referenced_count',
'closed_count',
'head_ref_force_pushed_count',
'merged_count',
'milestoned_count',
'unlabeled_count',
'head_ref_deleted_count',
'comment_count',
'commit_count',
'file_count',
'lines_added',
'lines_removed'
]].astype(float)
# Convert years to int (then str) so they don't display as 2019.0, for example
pr_all[[
'created_year',
'closed_year']] = pr_all[['created_year',
'closed_year']].fillna(-1).astype(int).astype(str)
pr_all.dtypes
print(pr_all['repo_name'].unique())
###Output
_____no_output_____
###Markdown
Add `average_days_between_responses` and `average_hours_between_responses` columns
###Code
# Get days for average_time_between_responses time delta
pr_all['average_days_between_responses'] = pr_all['average_time_between_responses'].map(lambda x: x.days).astype(float)
pr_all['average_hours_between_responses'] = pr_all['average_time_between_responses'].map(lambda x: x.total_seconds() / 3600).astype(float)  # total_seconds keeps the sub-day remainder instead of rounding to whole days
pr_all.head()
###Output
_____no_output_____
###Markdown
Date filtering entire dataframe
###Code
start_date = pd.to_datetime(begin_date)
# end_date = pd.to_datetime('2020-02-01 09:00:00')
end_date = pd.to_datetime(end_date)
pr_all = pr_all[(pr_all['pr_created_at'] > start_date) & (pr_all['pr_closed_at'] < end_date)]
pr_all['created_year'] = pr_all['created_year'].map(int)
pr_all['created_month'] = pr_all['created_month'].map(int)
pr_all['created_month'] = pr_all['created_month'].map(lambda x: '{0:0>2}'.format(x))
pr_all['created_yearmonth'] = pd.to_datetime(pr_all['created_year'].map(str) + '-' + pr_all['created_month'].map(str) + '-01')
pr_all.head(1)
###Output
_____no_output_____
###Markdown
Add a `days_to_close` column for pull requests that are still open (closed pull requests already have this column filled from the query). Note: there will be no pull requests that are still open in the dataframe if you filtered by an end date in the cell above.
###Code
import datetime
# getting the number of days of (today - created at) for the PRs that are still open
# and putting this in the days_to_close column
# get timedeltas of creation time to todays date/time
days_to_close_open_pr = datetime.datetime.now() - pr_all.loc[pr_all['pr_src_state'] == 'open']['pr_created_at']
# get num days from above timedelta
days_to_close_open_pr = days_to_close_open_pr.apply(lambda x: x.days).astype(int)
# for only OPEN pr's, set the days_to_close column equal to above dataframe
pr_all.loc[pr_all['pr_src_state'] == 'open'] = pr_all.loc[pr_all['pr_src_state'] == 'open'].assign(days_to_close=days_to_close_open_pr)
pr_all.loc[pr_all['pr_src_state'] == 'open'].head()
###Output
_____no_output_____
###Markdown
Add `closed_yearmonth` column for only CLOSED pull requests
###Code
# initiate column by setting all null datetimes
pr_all['closed_yearmonth'] = pd.to_datetime(np.nan)
# Fill column with prettified string of year/month closed that looks like: 2019-07-01
pr_all.loc[pr_all['pr_src_state'] == 'closed'] = pr_all.loc[pr_all['pr_src_state'] == 'closed'].assign(
closed_yearmonth = pd.to_datetime(pr_all.loc[pr_all['pr_src_state'] == 'closed']['closed_year'].astype(int
).map(str) + '-' + pr_all.loc[pr_all['pr_src_state'] == 'closed']['closed_month'].astype(int).map(str) + '-01'))
pr_all.loc[pr_all['pr_src_state'] == 'closed']
###Output
_____no_output_____
###Markdown
Add a `merged_flag` column: a prettified string based on whether the `pr_merged_at` column is null
###Code
""" Merged flag """
if 'pr_merged_at' in pr_all.columns.values:
pr_all['pr_merged_at'] = pr_all['pr_merged_at'].fillna(0)
pr_all['merged_flag'] = 'Not Merged / Rejected'
pr_all['merged_flag'].loc[pr_all['pr_merged_at'] != 0] = 'Merged / Accepted'
pr_all['merged_flag'].loc[pr_all['pr_src_state'] == 'open'] = 'Still Open'
del pr_all['pr_merged_at']
pr_all['merged_flag']
###Output
_____no_output_____
###Markdown
Split into different dataframes: all, open, closed, merged, and not merged, plus the slowest 20% of each of these categories
###Code
# Isolate the different state PRs for now
pr_open = pr_all.loc[pr_all['pr_src_state'] == 'open']
pr_closed = pr_all.loc[pr_all['pr_src_state'] == 'closed']
pr_merged = pr_all.loc[pr_all['merged_flag'] == 'Merged / Accepted']
pr_not_merged = pr_all.loc[pr_all['merged_flag'] == 'Not Merged / Rejected']
pr_closed['merged_flag']
###Output
_____no_output_____
###Markdown
Create dataframes that contain the slowest 20% pull requests of each group
###Code
# Filtering the 80th percentile slowest PRs
def filter_20_per_slowest(input_df):
pr_slow20_filtered = pd.DataFrame()
pr_slow20_x = pd.DataFrame()
for value in repo_set:
if not pr_slow20_filtered.empty:
pr_slow20x = input_df.query('repo_id==@value')
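# rank(pct=True) gives each PR its percentile of days_to_close within this repo; keeping >= .8 keeps the slowest 20%.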
pr_slow20x['percentile_rank_local'] = pr_slow20x.days_to_close.rank(pct=True)
pr_slow20x = pr_slow20x.query('percentile_rank_local >= .8', )
pr_slow20_filtered = pd.concat([pr_slow20x, pr_slow20_filtered])
reponame = str(value)
filename = ''.join(['output/pr_slowest20pct', reponame, '.csv'])
pr_slow20x.to_csv(filename)
else:
# first time
pr_slow20_filtered = input_df.copy()
pr_slow20_filtered['percentile_rank_local'] = pr_slow20_filtered.days_to_close.rank(pct=True)
pr_slow20_filtered = pr_slow20_filtered.query('percentile_rank_local >= .8', )
# print(pr_slow20_filtered.describe())
return pr_slow20_filtered
pr_slow20_open = filter_20_per_slowest(pr_open)
pr_slow20_closed = filter_20_per_slowest(pr_closed)
pr_slow20_merged = filter_20_per_slowest(pr_merged)
pr_slow20_not_merged = filter_20_per_slowest(pr_not_merged)
pr_slow20_all = filter_20_per_slowest(pr_all)
pr_slow20_merged#.head()
#create a dictionary with a number as the key and a letter as the value
#this is used to alias repos when using the 'competitors' display grouping
letters = []
nums = []
alpha = 'a'
for i in range(0, 26):
letters.append(alpha)
alpha = chr(ord(alpha) + 1)
nums.append(i)
letters = [x.upper() for x in letters]
#create dict out of list of numbers and letters
repo_alias_dict = {nums[i]: letters[i] for i in range(len(nums))}
# create dict in the form {repo_id : repo_name}
aliased_repos = []
repo_dict = {}
count = 0
for repo_id in repo_set:
#find corresponding repo name from each repo_id
repo_name = pr_all.loc[pr_all['repo_id'] == repo_id].iloc[0]['repo_name']
#if competitor grouping is enabled turn all repo names, other than the ones in the 'not_aliased_repos' into an alias
if display_grouping == 'competitors' and not repo_id in not_aliased_repos:
repo_name = 'Repo ' + repo_alias_dict[count]
#add repo_id to list of aliased repos, this is used for ordering
aliased_repos.append(repo_id)
count += 1
#add repo_id and repo names as key value pairs into a dict, this is used to label the title of the visualizations
repo_dict.update({repo_id : repo_name})
#guarantees that the non-aliased repos come first when display_grouping is set to 'competitors'
repo_list = not_aliased_repos + aliased_repos
###Output
_____no_output_____
###Markdown
Start Visualization Methods
###Code
from bokeh.palettes import Colorblind, mpl, Category20
from bokeh.layouts import gridplot
from bokeh.models.annotations import Title
from bokeh.io import export_png
from bokeh.io import show, output_notebook
from bokeh.models import ColumnDataSource, Legend, LabelSet, Range1d, LinearAxis, Label
from bokeh.plotting import figure
from bokeh.models.glyphs import Rect
from bokeh.transform import dodge
try:
colors = Colorblind[len(repo_set)]
except:
colors = Colorblind[3]
#mpl['Plasma'][len(repo_set)]
#['A6CEE3','B2DF8A','33A02C','FB9A99']
def remove_outliers(input_df, field, num_outliers_repo_map):
df_no_outliers = input_df.copy()
for repo_name, num_outliers in num_outliers_repo_map.items():
indices_to_drop = input_df.loc[input_df['repo_name'] == repo_name].nlargest(num_outliers, field).index
df_no_outliers = df_no_outliers.drop(index=indices_to_drop)
return df_no_outliers
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.dates import DateFormatter
import datetime as dt
def visualize_mean_days_to_close(input_df, x_axis='closed_yearmonth', description='Closed', save_file=False, num_remove_outliers=0, drop_outliers_repo=None):
# Set the df you want to build the viz's for
driver_df = input_df.copy()
driver_df = driver_df[['repo_id', 'repo_name', 'pr_src_id', 'created_yearmonth', 'closed_yearmonth', 'days_to_close']]
if save_file:
driver_df.to_csv('output/c.westw20small {}.csv'.format(description))
driver_df_mean = driver_df.groupby(['repo_id', x_axis, 'repo_name'],as_index=False).mean()
# Total PRS Closed
fig, ax = plt.subplots()
# the size of A4 paper
fig.set_size_inches(16, 8)
plotter = sns.lineplot(x=x_axis, y='days_to_close', style='repo_name', data=driver_df_mean, sort=True, legend='full', linewidth=2.5, hue='repo_name').set_title("Average Days to Close of {} Pull Requests, July 2017-January 2020".format(description))
if save_file:
fig.savefig('images/slow_20_mean {}.png'.format(description))
# Copying array and deleting the outlier in the copy to re-visualize
def drop_n_largest(input_df, n, repo_name):
input_df_copy = input_df.copy()
indices_to_drop = input_df.loc[input_df['repo_name'] == repo_name].nlargest(n,'days_to_close').index  # use the repo passed in rather than a hard-coded name
print("Indices to drop: {}".format(indices_to_drop))
input_df_copy = input_df_copy.drop(index=indices_to_drop)
input_df_copy.loc[input_df['repo_name'] == repo_name]
return input_df_copy
if num_remove_outliers > 0 and drop_outliers_repo:
driver_df_mean_no_outliers = drop_n_largest(driver_df_mean, num_remove_outliers, drop_outliers_repo)
# Total PRS Closed without outlier
fig, ax = plt.subplots()
# the size of A4 paper
fig.set_size_inches(16, 8)
plotter = sns.lineplot(x=x_axis, y='days_to_close', style='repo_name', data=driver_df_mean_no_outliers, sort=False, legend='full', linewidth=2.5, hue='repo_name').set_title("Average Days to Close among {} Pull Requests Without Outlier, July 2017-January 2020".format(description))
plotterlabels = ax.set_xticklabels(driver_df_mean_no_outliers[x_axis], rotation=90, fontsize=8)
if save_file:
fig.savefig('images/slow_20_mean_no_outlier {}.png'.format(description))
#visualize_mean_days_to_close(pr_closed, description='All Closed', save_file=False)
from bokeh.models import ColumnDataSource, FactorRange
from bokeh.transform import factor_cmap
def vertical_grouped_bar(input_df, repo_id, group_by = 'merged_flag', x_axis='closed_year', y_axis='num_commits', description='All', title="{}Average Commit Counts Per Year for {} Pull Requests"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook() # let bokeh display plot in jupyter cell output
driver_df = input_df.copy() # deep copy input data so we do not change the external dataframe
# Filter df by passed *repo_id* param
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
# Convert closed year to int (then str) so it doesn't display as 2019.0, for example
driver_df['closed_year'] = driver_df['closed_year'].astype(int).astype(str)
# contains the closed years
x_groups = sorted(list(driver_df[x_axis].unique()))
# inner groups on x_axis they are merged and not_merged
groups = list(driver_df[group_by].unique())
# set up the color palette
try:
colors = mpl['Plasma'][len(groups)]
except:
colors = [mpl['Plasma'][3][0]] + [mpl['Plasma'][3][1]]
merged_avg_values = list(driver_df.loc[driver_df[group_by] == 'Merged / Accepted'].groupby([x_axis],as_index=False).mean().round(1)['commit_count'])
not_merged_avg_values = list(driver_df.loc[driver_df[group_by] == 'Not Merged / Rejected'].groupby([x_axis],as_index=False).mean().round(1)['commit_count'])
# Setup data in format for grouped bar chart
data = {
'years' : x_groups,
'Merged / Accepted' : merged_avg_values,
'Not Merged / Rejected' : not_merged_avg_values,
}
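# Build Bokeh's nested x factors ((year, merged-status) pairs) and interleave the two mean series so each factor gets its own bar height.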
x = [ (year, pr_state) for year in x_groups for pr_state in groups ]
counts = sum(zip(data['Merged / Accepted'], data['Not Merged / Rejected']), ())
source = ColumnDataSource(data=dict(x=x, counts=counts))
title_beginning = '{}: '.format(repo_dict[repo_id])
title=title.format(title_beginning, description)
plot_width = len(x_groups) * 300
title_text_font_size = 16
if (len(title) * title_text_font_size / 2) > plot_width:
plot_width = int(len(title) * title_text_font_size / 2) + 40
p = figure(x_range=FactorRange(*x), plot_height=450, plot_width=plot_width, title=title, y_range=(0, max(merged_avg_values + not_merged_avg_values)*1.15), toolbar_location=None)
# Vertical bar glyph
p.vbar(x='x', top='counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=colors, factors=groups, start=1, end=2))
# Data label
labels = LabelSet(x='x', y='counts', text='counts',# y_offset=-8, x_offset=34,
text_font_size="12pt", text_color="black",
source=source, text_align='center')
p.add_layout(labels)
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
p.yaxis.axis_label = 'Average Commits / Pull Request'
p.xaxis.axis_label = 'Year Closed'
p.title.align = "center"
p.title.text_font_size = "{}px".format(title_text_font_size)
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "15px"
p.yaxis.axis_label_text_font_size = "15px"
p.yaxis.major_label_text_font_size = "15px"
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the average commits per pull requests over an entire year, for merged and not merged pull requests."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
#show(p)
if save_files:
export_png(grid, filename="./images/v_grouped_bar/v_grouped_bar__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
#vertical_grouped_bar(pr_all, repo_id=repo_list)
def vertical_grouped_bar_line_counts(input_df, repo_id, x_axis='closed_year', y_max1=600000, y_max2=1000, description="", title ="", save_file=False):
output_notebook() # let bokeh display plot in jupyter cell output
driver_df = input_df.copy() # deep copy input data so we do not change the external dataframe
# Filter df by passed *repo_id* param
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
# Convert closed year to int (then str) so it doesn't display as 2019.0, for example
driver_df['closed_year'] = driver_df['closed_year'].astype(int).astype(str)
# contains the closed years
x_groups = sorted(list(driver_df[x_axis].unique()))
groups = ['Lines Added', 'Lines Removed', 'Files Changed']
# set up the color palette
colors = mpl['Plasma'][3]
display(pr_all[pr_all['lines_added'].notna()])#.groupby([x_axis],as_index=False).mean())
files_avg_values = list(driver_df.groupby([x_axis],as_index=False).mean().round(1)['file_count'])
added_avg_values = list(driver_df.groupby([x_axis],as_index=False).mean().round(1)['lines_added'])
removed_avg_values = list(driver_df.groupby([x_axis],as_index=False).mean().round(1)['lines_removed'])
display(driver_df.groupby([x_axis],as_index=False).mean())
print(files_avg_values)
print(added_avg_values)
print(removed_avg_values)
# Setup data in format for grouped bar chart
data = {
'years' : x_groups,
'Lines Added' : added_avg_values,
'Lines Removed' : removed_avg_values,
'Files Changed' : files_avg_values
}
x = [ (year, pr_state) for year in x_groups for pr_state in groups ]
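# Interleave each series with zeros so every (year, group) factor carries a value on only one of the two y-axes.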
line_counts = sum(zip(data['Lines Added'], data['Lines Removed'], [0]*len(x_groups)), ())
file_counts = sum(zip([0]*len(x_groups),[0]*len(x_groups),data['Files Changed']), ())
print(line_counts)
print(file_counts)
source = ColumnDataSource(data=dict(x=x, line_counts=line_counts, file_counts=file_counts))
if y_max1:
p = figure(x_range=FactorRange(*x), plot_height=450, plot_width=700, title=title.format(description), y_range=(0,y_max1))
else:
p = figure(x_range=FactorRange(*x), plot_height=450, plot_width=700, title=title.format(description))
# Setting the second y axis range name and range
p.extra_y_ranges = {"file_counts": Range1d(start=0, end=y_max2)}
# Adding the second axis to the plot.
p.add_layout(LinearAxis(y_range_name="file_counts"), 'right')
# Data label for line counts
labels = LabelSet(x='x', y='line_counts', text='line_counts',y_offset=8,# x_offset=34,
text_font_size="10pt", text_color="black",
source=source, text_align='center')
p.add_layout(labels)
# Vertical bar glyph for line counts
p.vbar(x='x', top='line_counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=colors, factors=groups, start=1, end=2))
# Data label for file counts
labels = LabelSet(x='x', y='file_counts', text='file_counts', y_offset=0, #x_offset=34,
text_font_size="10pt", text_color="black",
source=source, text_align='center', y_range_name="file_counts")
p.add_layout(labels)
# Vertical bar glyph for file counts
p.vbar(x='x', top='file_counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=colors, factors=groups, start=1, end=2), y_range_name="file_counts")
p.left[0].formatter.use_scientific = False
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
p.yaxis.axis_label = 'Average Commits / Pull Request'
p.xaxis.axis_label = 'Year Closed'
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
show(p)
if save_files:
export_png(p, filename="./images/v_grouped_bar/v_grouped_bar__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
""" THIS VIZ IS NOT READY YET , BUT UNCOMMENT LINE BELOW IF YOU WANT TO SEE"""
# vertical_grouped_bar_line_counts(pr_all, description='All', title="Average Size Metrics Per Year for {} Merged Pull Requests in Master", save_file=False, y_max1=580000, y_max2=1100)
None
def horizontal_stacked_bar(input_df, repo_id, group_by='merged_flag', x_axis='comment_count', description="All Closed", y_axis='closed_year', title="Mean Comments for {} Pull Requests"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
driver_df = input_df.copy()
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
output_notebook()
try:
y_groups = sorted(list(driver_df[y_axis].unique()))
except:
y_groups = [repo_id]
groups = driver_df[group_by].unique()
try:
colors = mpl['Plasma'][len(groups)]
except:
colors = [mpl['Plasma'][3][0]] + [mpl['Plasma'][3][1]]
len_not_merged = len(driver_df.loc[driver_df['merged_flag'] == 'Not Merged / Rejected'])
len_merged = len(driver_df.loc[driver_df['merged_flag'] == 'Merged / Accepted'])
title_beginning = '{}: '.format(repo_dict[repo_id])
plot_width = 650
p = figure(y_range=y_groups, plot_height=450, plot_width=plot_width, # y_range=y_groups,#(pr_all[y_axis].min(),pr_all[y_axis].max()) #y_axis_type="datetime",
title='{} {}'.format(title_beginning, title.format(description)), toolbar_location=None)
possible_maximums= []
for y_value in y_groups:
y_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Merged / Accepted')]
y_not_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Not Merged / Rejected')]
if len(y_merged_data) > 0:
y_merged_data[x_axis + '_mean'] = y_merged_data[x_axis].mean().round(1)
else:
y_merged_data[x_axis + '_mean'] = 0.00
if len(y_not_merged_data) > 0:
y_not_merged_data[x_axis + '_mean'] = y_not_merged_data[x_axis].mean().round(1)
else:
y_not_merged_data[x_axis + '_mean'] = 0
not_merged_source = ColumnDataSource(y_not_merged_data)
merged_source = ColumnDataSource(y_merged_data)
possible_maximums.append(max(y_not_merged_data[x_axis + '_mean']))
possible_maximums.append(max(y_merged_data[x_axis + '_mean']))
# mean comment count for merged
merged_comment_count_glyph = p.hbar(y=dodge(y_axis, -0.1, range=p.y_range), left=0, right=x_axis + '_mean', height=0.04*len(driver_df[y_axis].unique()),
source=merged_source, fill_color="black")#,legend_label="Mean Days to Close",
# Data label
labels = LabelSet(x=x_axis + '_mean', y=dodge(y_axis, -0.1, range=p.y_range), text=x_axis + '_mean', y_offset=-8, x_offset=34,
text_font_size="12pt", text_color="black",
source=merged_source, text_align='center')
p.add_layout(labels)
# mean comment count For nonmerged
not_merged_comment_count_glyph = p.hbar(y=dodge(y_axis, 0.1, range=p.y_range), left=0, right=x_axis + '_mean',
height=0.04*len(driver_df[y_axis].unique()), source=not_merged_source, fill_color="#e84d60")#legend_label="Mean Days to Close",
# Data label
labels = LabelSet(x=x_axis + '_mean', y=dodge(y_axis, 0.1, range=p.y_range), text=x_axis + '_mean', y_offset=-8, x_offset=34,
text_font_size="12pt", text_color="#e84d60",
source=not_merged_source, text_align='center')
p.add_layout(labels)
# p.y_range.range_padding = 0.1
p.ygrid.grid_line_color = None
p.legend.location = "bottom_right"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.xaxis.axis_label = 'Average Comments / Pull Request'
p.yaxis.axis_label = 'Repository' if y_axis == 'repo_name' else 'Year Closed' if y_axis == 'closed_year' else ''
legend = Legend(
items=[
("Merged Pull Request Mean Comment Count", [merged_comment_count_glyph]),
("Rejected Pull Request Mean Comment Count", [not_merged_comment_count_glyph])
],
location='center',
orientation='vertical',
border_line_color="black"
)
p.add_layout(legend, "below")
p.title.text_font_size = "16px"
p.title.align = "center"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.x_range = Range1d(0, max(possible_maximums)*1.15)
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the average number of comments per merged or not merged pull request."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
#show(p, plot_width=1200, plot_height=300*len(y_groups) + 300)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/h_stacked_bar_mean_comments_merged_status/mean_comments_merged_status__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
#horizontal_stacked_bar(pr_closed, repo_id=repo_list)
def merged_ratio_vertical_grouped_bar(data_dict, repo_id, x_axis='closed_year', description="All Closed", title="Count of {} Pull Requests by Merged Status"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook()
colors = mpl['Plasma'][6]
#if repo_name == 'mbed-os':
#colors = colors[::-1]
for data_desc, input_df in data_dict.items():
x_groups = sorted(list(input_df[x_axis].astype(str).unique()))
break
plot_width = 315 * len(x_groups)
title_beginning = repo_dict[repo_id]
p = figure(x_range=x_groups, plot_height=350, plot_width=plot_width,
title='{}: {}'.format(title_beginning, title.format(description)), toolbar_location=None)
dodge_amount = 0.12
color_index = 0
x_offset = 50
all_totals = []
for data_desc, input_df in data_dict.items():
driver_df = input_df.copy()
driver_df[x_axis] = driver_df[x_axis].astype(str)
groups = sorted(list(driver_df['merged_flag'].unique()))
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
len_merged = []
zeros = []
len_not_merged = []
totals = []
for x_group in x_groups:
len_merged_entry = len(driver_df.loc[(driver_df['merged_flag'] == 'Merged / Accepted') & (driver_df[x_axis] == x_group)])
totals += [len(driver_df.loc[(driver_df['merged_flag'] == 'Not Merged / Rejected') & (driver_df[x_axis] == x_group)]) + len_merged_entry]
len_not_merged += [len(driver_df.loc[(driver_df['merged_flag'] == 'Not Merged / Rejected') & (driver_df[x_axis] == x_group)])]
len_merged += [len_merged_entry]
zeros.append(0)
data = {'X': x_groups}
for group in groups:
data[group] = []
for x_group in x_groups:
data[group] += [len(driver_df.loc[(driver_df['merged_flag'] == group) & (driver_df[x_axis] == x_group)])]
data['len_merged'] = len_merged
data['len_not_merged'] = len_not_merged
data['totals'] = totals
data['zeros'] = zeros
if data_desc == "All":
all_totals = totals
source = ColumnDataSource(data)
stacked_bar = p.vbar_stack(groups, x=dodge('X', dodge_amount, range=p.x_range), width=0.2, source=source, color=colors[1:3], legend_label=[f"{data_desc} " + "%s" % x for x in groups])
# Data label for merged
p.add_layout(
LabelSet(x=dodge('X', dodge_amount, range=p.x_range), y='zeros', text='len_merged', y_offset=2, x_offset=x_offset,
text_font_size="12pt", text_color=colors[1:3][0],
source=source, text_align='center')
)
if min(data['totals']) < 400:
y_offset = 15
else:
y_offset = 0
# Data label for not merged
p.add_layout(
LabelSet(x=dodge('X', dodge_amount, range=p.x_range), y='totals', text='len_not_merged', y_offset=y_offset, x_offset=x_offset,
text_font_size="12pt", text_color=colors[1:3][1],
source=source, text_align='center')
)
# Data label for total
p.add_layout(
LabelSet(x=dodge('X', dodge_amount, range=p.x_range), y='totals', text='totals', y_offset=0, x_offset=0,
text_font_size="12pt", text_color='black',
source=source, text_align='center')
)
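# Flip the dodge direction, palette order, and label offset before the next dataset ('Slowest 20%') so its bars sit beside the first set.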
dodge_amount *= -1
colors = colors[::-1]
x_offset *= -1
p.y_range = Range1d(0, max(all_totals)*1.4)
p.xgrid.grid_line_color = None
p.legend.location = "top_center"
p.legend.orientation="horizontal"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.yaxis.axis_label = 'Count of Pull Requests'
p.xaxis.axis_label = 'Repository' if x_axis == 'repo_name' else 'Year Closed' if x_axis == 'closed_year' else ''
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.outline_line_color = None
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the number of closed pull requests per year in four different categories. These four categories are All Merged, All Not Merged, Slowest 20% Merged, and Slowest 20% Not Merged."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/v_stacked_bar_merged_status_count/stacked_bar_merged_status_count__{}_PRs__xaxis_{}__repo_{}.png".format(description, x_axis, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#merged_ratio_vertical_grouped_bar({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
def visualize_mean_response_times(input_df, repo_id, time_unit='days', x_max=95, y_axis='closed_year', description="All Closed", legend_position=(410, 10)):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook() # let bokeh show plot in jupyter cell output
driver_df = input_df.copy()[['repo_name', 'repo_id', 'merged_flag', y_axis, time_unit + '_to_first_response', time_unit + '_to_last_response',
time_unit + '_to_close']] # deep copy input data so we do not alter the external dataframe
# filter by repo_id param
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
title_beginning = '{}: '.format(repo_dict[repo_id])
plot_width = 950
p = figure(toolbar_location=None, y_range=sorted(driver_df[y_axis].unique()), plot_width=plot_width,
plot_height=450,#75*len(driver_df[y_axis].unique()),
title="{}Mean Response Times for Pull Requests {}".format(title_beginning, description))
first_response_glyphs = []
last_response_glyphs = []
merged_days_to_close_glyphs = []
not_merged_days_to_close_glyphs = []
possible_maximums = []
for y_value in driver_df[y_axis].unique():
y_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Merged / Accepted')]
y_not_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Not Merged / Rejected')]
y_merged_data[time_unit + '_to_first_response_mean'] = y_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_last_response_mean'] = y_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_close_mean'] = y_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_first_response_mean'] = y_not_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_last_response_mean'] = y_not_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_close_mean'] = y_not_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
possible_maximums.append(max(y_merged_data[time_unit + '_to_close_mean']))
possible_maximums.append(max(y_not_merged_data[time_unit + '_to_close_mean']))
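# Size the x-range from the largest mean close time; ideal_difference (~6.4% of that range) is the spacing threshold below which labels get nudged apart.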
maximum = max(possible_maximums)*1.15
ideal_difference = maximum*0.064
for y_value in driver_df[y_axis].unique():
y_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Merged / Accepted')]
y_not_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Not Merged / Rejected')]
y_merged_data[time_unit + '_to_first_response_mean'] = y_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_last_response_mean'] = y_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_close_mean'] = y_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_first_response_mean'] = y_not_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_last_response_mean'] = y_not_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_close_mean'] = y_not_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
not_merged_source = ColumnDataSource(y_not_merged_data)
merged_source = ColumnDataSource(y_merged_data)
# mean PR length for merged
merged_days_to_close_glyph = p.hbar(y=dodge(y_axis, -0.1, range=p.y_range), left=0, right=time_unit + '_to_close_mean', height=0.04*len(driver_df[y_axis].unique()),
source=merged_source, fill_color="black")#,legend_label="Mean Days to Close",
merged_days_to_close_glyphs.append(merged_days_to_close_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_close_mean', y=dodge(y_axis, -0.1, range=p.y_range), text=time_unit + '_to_close_mean', y_offset=-8, x_offset=34, #34
text_font_size="12pt", text_color="black",
source=merged_source, text_align='center')
p.add_layout(labels)
# mean PR length For nonmerged
not_merged_days_to_close_glyph = p.hbar(y=dodge(y_axis, 0.1, range=p.y_range), left=0, right=time_unit + '_to_close_mean',
height=0.04*len(driver_df[y_axis].unique()), source=not_merged_source, fill_color="#e84d60")#legend_label="Mean Days to Close",
not_merged_days_to_close_glyphs.append(not_merged_days_to_close_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_close_mean', y=dodge(y_axis, 0.1, range=p.y_range), text=time_unit + '_to_close_mean', y_offset=-8, x_offset=44,
text_font_size="12pt", text_color="#e84d60",
source=not_merged_source, text_align='center')
p.add_layout(labels)
#if the difference between two values is less than 6.4 percent move the second one to the right 30 pixels
if (max(y_merged_data[time_unit + '_to_last_response_mean']) - max(y_merged_data[time_unit + '_to_first_response_mean'])) < ideal_difference:
merged_x_offset = 30
else:
merged_x_offset = 0
#if the difference between two values is less than 6.4 percent move the second one to the right 30 pixels
if (max(y_not_merged_data[time_unit + '_to_last_response_mean']) - max(y_not_merged_data[time_unit + '_to_first_response_mean'])) < ideal_difference:
not_merged_x_offset = 30
else:
not_merged_x_offset = 0
#if there is only one bar set the y_offsets so the labels will not overlap the bars
if len(driver_df[y_axis].unique()) == 1:
merged_y_offset = -65
not_merged_y_offset = 45
else:
merged_y_offset = -45
not_merged_y_offset = 25
# mean time to first response
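# (drawn as a thin Rect marker whose width scales with x_max so it stays visible at any x-range)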
glyph = Rect(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, -0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[0])
first_response_glyph = p.add_glyph(merged_source, glyph)
first_response_glyphs.append(first_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, 0, range=p.y_range),text=time_unit + '_to_first_response_mean',x_offset = 0, y_offset=merged_y_offset,#-60,
text_font_size="12pt", text_color=colors[0],
source=merged_source, text_align='center')
p.add_layout(labels)
#for nonmerged
glyph = Rect(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, 0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[0])
first_response_glyph = p.add_glyph(not_merged_source, glyph)
first_response_glyphs.append(first_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, 0, range=p.y_range),text=time_unit + '_to_first_response_mean',x_offset = 0, y_offset=not_merged_y_offset,#40,
text_font_size="12pt", text_color=colors[0],
source=not_merged_source, text_align='center')
p.add_layout(labels)
# mean time to last response
glyph = Rect(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, -0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[1])
last_response_glyph = p.add_glyph(merged_source, glyph)
last_response_glyphs.append(last_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, 0, range=p.y_range), text=time_unit + '_to_last_response_mean', x_offset=merged_x_offset, y_offset=merged_y_offset,#-60,
text_font_size="12pt", text_color=colors[1],
source=merged_source, text_align='center')
p.add_layout(labels)
#for nonmerged
glyph = Rect(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, 0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[1])
last_response_glyph = p.add_glyph(not_merged_source, glyph)
last_response_glyphs.append(last_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, 0, range=p.y_range), text=time_unit + '_to_last_response_mean', x_offset = not_merged_x_offset, y_offset=not_merged_y_offset,#40,
text_font_size="12pt", text_color=colors[1],
source=not_merged_source, text_align='center')
p.add_layout(labels)
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label = "Days to Close"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
#adjust the starting point and ending point based on the maximum of maximum of the graph
p.x_range = Range1d(maximum/30 * -1, maximum*1.15)
p.yaxis.axis_label = "Repository" if y_axis == 'repo_name' else 'Year Closed' if y_axis == 'closed_year' else ''
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.ygrid.grid_line_color = None
p.y_range.range_padding = 0.15
p.outline_line_color = None
p.toolbar.logo = None
p.toolbar_location = None
def add_legend(location, orientation, side):
legend = Legend(
items=[
("Mean Days to First Response", first_response_glyphs),
("Mean Days to Last Response", last_response_glyphs),
("Merged Mean Days to Close", merged_days_to_close_glyphs),
("Not Merged Mean Days to Close", not_merged_days_to_close_glyphs)
],
location=location,
orientation=orientation,
border_line_color="black"
# title='Example Title'
)
p.add_layout(legend, side)
# add_legend((150, 50), "horizontal", "center")
add_legend(legend_position, "vertical", "right")
plot = p
p = figure(width = plot_width, height = 200, margin = (0, 0, 0, 0))
caption = "Caption Here"
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
export_png(grid, filename="./images/hbar_response_times/mean_response_times__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
# for repo_name in pr_closed['repo_name'].unique():
#visualize_mean_response_times(pr_closed, repo_id=repo_list, legend_position='center')
def visualize_mean_time_between_responses(data_dict, repo_id, time_unit='Days', x_axis='closed_yearmonth', description="All Closed", line_group='merged_flag', y_axis='average_days_between_responses', num_outliers_repo_map={}):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook()
plot_width = 950
p1 = figure(x_axis_type="datetime", title="{}: Mean {} Between Comments by Month Closed for {} Pull Requests".format(repo_dict[repo_id], time_unit, description), plot_width=plot_width, x_range=(pr_all[x_axis].min(),pr_all[x_axis].max()), plot_height=500, toolbar_location=None)
colors = Category20[10][6:]
color_index = 0
glyphs = []
possible_maximums = []
for data_desc, input_df in data_dict.items():
driver_df = input_df.copy()
driver_df = remove_outliers(driver_df, y_axis, num_outliers_repo_map)
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
index = 0
driver_df_mean = driver_df.groupby(['repo_id', line_group, x_axis],as_index=False).mean()
title_ending = ''
if repo_id:
title_ending += ' for Repo: {}'.format(repo_id)
for group_num, line_group_value in enumerate(driver_df[line_group].unique(), color_index):
glyphs.append(p1.line(driver_df_mean.loc[driver_df_mean[line_group] == line_group_value][x_axis], driver_df_mean.loc[driver_df_mean[line_group] == line_group_value][y_axis], color=colors[group_num], line_width = 3))
color_index += 1
possible_maximums.append(max(driver_df_mean.loc[driver_df_mean[line_group] == line_group_value][y_axis].dropna()))
for repo, num_outliers in num_outliers_repo_map.items():
if repo_dict[repo_id] == repo:  # compare against the (possibly aliased) name of the repo being plotted
p1.add_layout(Title(text="** {} outliers for {} were removed".format(num_outliers, repo), align="center"), "below")
p1.grid.grid_line_alpha = 0.3
p1.xaxis.axis_label = 'Month Closed'
p1.xaxis.ticker.desired_num_ticks = 15
p1.yaxis.axis_label = 'Mean {} Between Responses'.format(time_unit)
p1.legend.location = "top_left"
legend = Legend(
items=[
("All Not Merged / Rejected", [glyphs[0]]),
("All Merged / Accepted", [glyphs[1]]),
("Slowest 20% Not Merged / Rejected", [glyphs[2]]),
("Slowest 20% Merged / Accepted", [glyphs[3]])
],
location='center_right',
orientation='vertical',
border_line_color="black"
)
p1.add_layout(legend, 'right')
p1.title.text_font_size = "16px"
p1.xaxis.axis_label_text_font_size = "16px"
p1.xaxis.major_label_text_font_size = "16px"
p1.yaxis.axis_label_text_font_size = "16px"
p1.yaxis.major_label_text_font_size = "16px"
p1.xaxis.major_label_orientation = 45.0
p1.y_range = Range1d(0, max(possible_maximums)*1.15)
plot = p1
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the average number of days between comments for all closed pull requests per month in four categories. These four categories are All Merged, All Not Merged, Slowest 20% Merged, and Slowest 20% Not Merged."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/line_mean_time_between_comments/line_mean_time_between_comments__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#visualize_mean_time_between_responses({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
def visualize_time_to_first_comment(input_df, repo_id, x_axis='pr_closed_at', y_axis='days_to_first_response', description='All', num_outliers_repo_map={}, group_by='merged_flag', same_scales=True, columns=2, legend_position='top_right', remove_outliers = 0):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook()
driver_df = input_df.copy()
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
group_by_groups = sorted(driver_df[group_by].unique())
seconds = ((driver_df[x_axis].max() + datetime.timedelta(days=25))- (driver_df[x_axis].min() - datetime.timedelta(days=30))).total_seconds()
quarter_years = seconds / 10506240
quarter_years = round(quarter_years)
title_beginning = '{}: '.format(repo_dict[repo_id])
plot_width = 180 * 5
p = figure(x_range=(driver_df[x_axis].min() - datetime.timedelta(days=30), driver_df[x_axis].max() + datetime.timedelta(days=25)),
#(driver_df[y_axis].min(), driver_df[y_axis].max()),
toolbar_location=None,
title='{}Days to First Response for {} Closed Pull Requests'.format(title_beginning, description), plot_width=plot_width,
plot_height=400, x_axis_type='datetime')
for index, group_by_group in enumerate(group_by_groups):
p.scatter(x_axis, y_axis, color=colors[index], marker="square", source=driver_df.loc[driver_df[group_by] == group_by_group], legend_label=group_by_group)
if group_by_group == "Merged / Accepted":
merged_values = driver_df.loc[driver_df[group_by] == group_by_group][y_axis].dropna().values.tolist()
else:
not_merged_values = driver_df.loc[driver_df[group_by] == group_by_group][y_axis].dropna().values.tolist()
values = not_merged_values + merged_values
#values.fillna(0)
for value in range(0, remove_outliers):
values.remove(max(values))
#determine y_max by finding the max of the values and scaling it up a small amount
y_max = max(values)*1.0111
outliers = driver_df.loc[driver_df[y_axis] > y_max]
if len(outliers) > 0:
if repo_id:
p.add_layout(Title(text="** Outliers cut off at {} days: {} outlier(s) for {} were removed **".format(y_max, len(outliers), repo_name), align="center"), "below")
else:
p.add_layout(Title(text="** Outliers cut off at {} days: {} outlier(s) were removed **".format(y_max, len(outliers)), align="center"), "below")
p.xaxis.axis_label = 'Date Closed' if x_axis == 'pr_closed_at' else 'Date Created' if x_axis == 'pr_created_at' else 'Date'
p.yaxis.axis_label = 'Days to First Response'
p.legend.location = legend_position
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.y_range = Range1d(0, y_max)
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the days to first reponse for individual pull requests, either Merged or Not Merged."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = repo_ids
export_png(grid, filename="./images/first_comment_times/scatter_first_comment_times__{}_PRs__xaxis_{}__repo_{}.png".format(description, x_axis, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#visualize_time_to_first_comment(pr_closed, repo_id= repo_list, legend_position='top_right', remove_outliers = scatter_plot_outliers_removed)
def hex_to_RGB(hex):
''' "#FFFFFF" -> [255,255,255] '''
# Pass 16 to the integer function for change of base
return [int(hex[i:i+2], 16) for i in range(1,6,2)]
def color_dict(gradient):
''' Takes in a list of RGB sub-lists and returns dictionary of
colors in RGB and hex form for use in a graphing function
defined later on '''
return {"hex":[RGB_to_hex(RGB) for RGB in gradient],
"r":[RGB[0] for RGB in gradient],
"g":[RGB[1] for RGB in gradient],
"b":[RGB[2] for RGB in gradient]}
def RGB_to_hex(RGB):
''' [255,255,255] -> "#FFFFFF" '''
# Components need to be integers for hex to make sense
RGB = [int(x) for x in RGB]
return "#"+"".join(["0{0:x}".format(v) if v < 16 else
"{0:x}".format(v) for v in RGB])
def linear_gradient(start_hex, finish_hex="#FFFFFF", n=10):
''' returns a gradient list of (n) colors between
two hex colors. start_hex and finish_hex
should be the full six-digit color string,
including the number sign ("#FFFFFF") '''
# Starting and ending colors in RGB form
s = hex_to_RGB(start_hex)
f = hex_to_RGB(finish_hex)
# Initialize a list of the output colors with the starting color
RGB_list = [s]
# Calculate a color at each evenly spaced value of t from 1 to n
for t in range(1, n):
# Interpolate RGB vector for color at the current value of t
curr_vector = [
int(s[j] + (float(t)/(n-1))*(f[j]-s[j]))
for j in range(3)
]
# Add it to our list of output colors
RGB_list.append(curr_vector)
return color_dict(RGB_list)
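# Illustrative note: the heat maps below build their palettes this way, e.g.
# linear_gradient('#f5f5dc', '#fff44f', 150)['hex'] yields a 150-step list of hex
# colors suitable for a Bokeh LinearColorMapper.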
from bokeh.models import BasicTicker, ColorBar, LinearColorMapper, PrintfTickFormatter, LogTicker, Label
from bokeh.transform import transform
def events_types_heat_map(input_df, repo_id, include_comments=True, x_axis='closed_year', facet="merged_flag",columns=2, x_max=1100, same_scales=True, y_axis='repo_name', description="All Closed", title="Average Pull Request Event Types for {} Pull Requests"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
colors = linear_gradient('#f5f5dc', '#fff44f', 150)['hex']
driver_df = input_df.copy()
driver_df[x_axis] = driver_df[x_axis].astype(str)
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
if facet == 'closed_year' or y_axis == 'closed_year':
driver_df['closed_year'] = driver_df['closed_year'].astype(int).astype(str)
optional_comments = ['comment_count'] if include_comments else []
driver_df = driver_df[['repo_id', 'repo_name',x_axis, 'assigned_count',
'review_requested_count',
'labeled_count',
'subscribed_count',
'mentioned_count',
'referenced_count',
'closed_count',
'head_ref_force_pushed_count',
'merged_count',
'milestoned_count',
'unlabeled_count',
'head_ref_deleted_count', facet ] + optional_comments]
y_groups = [
'review_requested_count',
'labeled_count',
'subscribed_count',
'referenced_count',
'closed_count',
# 'milestoned_count',
] + optional_comments
output_notebook()
optional_group_comments = ['comment'] if include_comments else []
# y_groups = ['subscribed', 'mentioned', 'labeled', 'review_requested', 'head_ref_force_pushed', 'referenced', 'closed', 'merged', 'unlabeled', 'head_ref_deleted', 'milestoned', 'assigned'] + optional_group_comments
x_groups = sorted(list(driver_df[x_axis].unique()))
grid_array = []
grid_row = []
for index, facet_group in enumerate(sorted(driver_df[facet].unique())):
facet_data = driver_df.loc[driver_df[facet] == facet_group]
# display(facet_data.sort_values('merged_count', ascending=False).head(50))
driver_df_mean = facet_data.groupby(['repo_id', 'repo_name', x_axis], as_index=False).mean().round(1)
# data = {'Y' : y_groups}
# for group in y_groups:
# data[group] = driver_df_mean[group].tolist()
plot_width = 700
p = figure(y_range=y_groups, plot_height=500, plot_width=plot_width, x_range=x_groups,
title='{}'.format(facet_group))
for y_group in y_groups:
driver_df_mean['field'] = y_group
source = ColumnDataSource(driver_df_mean)
mapper = LinearColorMapper(palette=colors, low=driver_df_mean[y_group].min(), high=driver_df_mean[y_group].max())
p.rect(y='field', x=x_axis, width=1, height=1, source=source,
line_color=None, fill_color=transform(y_group, mapper))
# Data label
labels = LabelSet(x=x_axis, y='field', text=y_group, y_offset=-8,
text_font_size="12pt", text_color='black',
source=source, text_align='center')
p.add_layout(labels)
color_bar = ColorBar(color_mapper=mapper, location=(0, 0),
ticker=BasicTicker(desired_num_ticks=9),
formatter=PrintfTickFormatter(format="%d"))
# p.add_layout(color_bar, 'right')
p.y_range.range_padding = 0.1
p.ygrid.grid_line_color = None
p.legend.location = "bottom_right"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.xaxis.axis_label = 'Year Closed'
p.yaxis.axis_label = 'Event Type'
p.title.align = "center"
p.title.text_font_size = "15px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
grid_row.append(p)
if index % columns == columns - 1:
grid_array.append(grid_row)
grid_row = []
grid = gridplot(grid_array)
#add title, the title changes its x value based on the number of x_groups so that it stays centered
label=Label(x=-len(x_groups), y=6.9, text='{}: Average Pull Request Event Types for {} Closed Pull Requests'.format(repo_dict[repo_id], description), render_mode='css', text_font_size = '17px', text_font_style= 'bold')
p.add_layout(label)
show(grid, plot_width=1200, plot_height=1200)
if save_files:
comments_included = 'comments_included' if include_comments else 'comments_not_included'
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/h_stacked_bar_mean_event_types/mean_event_types__facet_{}__{}_PRs__yaxis_{}__{}__repo_{}.png".format(facet, description, y_axis, comments_included, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#events_types_heat_map(pr_closed, repo_id=repo_list)
red_green_gradient = linear_gradient('#0080FF', '#DC143C', 150)['hex']
#32CD32
def heat_map(input_df, repo_id, x_axis='repo_name', group_by='merged_flag', y_axis='closed_yearmonth', same_scales=True, description="All Closed", heat_field='days_to_first_response', columns=2, remove_outliers = 0):
output_notebook()
driver_df = input_df.copy()[['repo_id', y_axis, group_by, x_axis, heat_field]]
if display_grouping == 'repo':
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
driver_df[y_axis] = driver_df[y_axis].astype(str)
# add new group by + xaxis column
driver_df['grouped_x'] = driver_df[x_axis] + ' - ' + driver_df[group_by]
driver_df_mean = driver_df.groupby(['grouped_x', y_axis], as_index=False).mean()
colors = red_green_gradient
y_groups = driver_df_mean[y_axis].unique()
x_groups = sorted(driver_df[x_axis].unique())
grouped_x_groups = sorted(driver_df_mean['grouped_x'].unique())
values = driver_df_mean[heat_field].values.tolist()
for i in range(0, remove_outliers):
values.remove(max(values))
heat_max = max(values)* 1.02
mapper = LinearColorMapper(palette=colors, low=driver_df_mean[heat_field].min(), high=heat_max)#driver_df_mean[heat_field].max())
source = ColumnDataSource(driver_df_mean)
title_beginning = repo_dict[repo_id] + ':' if not type(repo_id) == type(repo_list) else ''
plot_width = 1100
p = figure(plot_width=plot_width, plot_height=300, title="{} Mean Duration (Days) {} Pull Requests".format(title_beginning,description),
y_range=grouped_x_groups[::-1], x_range=y_groups,
toolbar_location=None, tools="")#, x_axis_location="above")
for x_group in x_groups:
outliers = driver_df_mean.loc[(driver_df_mean[heat_field] > heat_max) & (driver_df_mean['grouped_x'].str.contains(x_group))]
if len(outliers) > 0:
p.add_layout(Title(text="** Outliers capped at {} days: {} outlier(s) for {} were capped at {} **".format(heat_max, len(outliers), x_group, heat_max), align="center"), "below")
p.rect(x=y_axis, y='grouped_x', width=1, height=1, source=source,
line_color=None, fill_color=transform(heat_field, mapper))
color_bar = ColorBar(color_mapper=mapper, location=(0, 0),
ticker=BasicTicker(desired_num_ticks=9),
formatter=PrintfTickFormatter(format="%d"))
p.add_layout(color_bar, 'right')
p.title.align = "center"
p.title.text_font_size = "16px"
p.axis.axis_line_color = None
p.axis.major_tick_line_color = None
p.axis.major_label_text_font_size = "11pt"
p.axis.major_label_standoff = 0
p.xaxis.major_label_orientation = 1.0
p.xaxis.axis_label = 'Month Closed' if y_axis[0:6] == 'closed' else 'Date Created' if y_axis[0:7] == 'created' else 'Repository' if y_axis == 'repo_name' else ''
# p.yaxis.axis_label = 'Merged Status'
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "14px"
p.yaxis.major_label_text_font_size = "15px"
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "Caption Here"
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/heat_map_pr_duration_merged_status/heat_map_duration_by_merged_status__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
#heat_map(pr_closed, repo_id=25502)
if display_grouping == 'repo':
for repo_id in repo_set:
vertical_grouped_bar(pr_all, repo_id=repo_id)
horizontal_stacked_bar(pr_closed, repo_id=repo_id)
merged_ratio_vertical_grouped_bar({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_id)
visualize_mean_response_times(pr_closed, repo_id=repo_id, legend_position='center')
visualize_mean_time_between_responses({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_id)
visualize_time_to_first_comment(pr_closed, repo_id= repo_id, legend_position='top_right', remove_outliers = scatter_plot_outliers_removed)
events_types_heat_map(pr_closed, repo_id=repo_id)
heat_map(pr_closed, repo_id=repo_id)
elif display_grouping == 'competitors':
vertical_grouped_bar(pr_all, repo_id=repo_list)
horizontal_stacked_bar(pr_closed, repo_id=repo_list)
merged_ratio_vertical_grouped_bar({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
visualize_mean_response_times(pr_closed, repo_id=repo_list, legend_position='center')
visualize_mean_time_between_responses({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
visualize_time_to_first_comment(pr_closed, repo_id= repo_list, legend_position='top_right', remove_outliers = scatter_plot_outliers_removed)
events_types_heat_map(pr_closed, repo_id=repo_list)
heat_map(pr_closed, repo_id=repo_list)
###Output
_____no_output_____
###Markdown
Pull Request Analysis Visualization Limitations for Reporting on Several Repos The visualizations in this notebook are, like most, able to coherently display information for between 1 and 8 different repositories simultaneously. Alternatives for Reporting on Repo Groups, Comprising Many Repos The included queries could be rewritten to show an entire repository group's characteristics if that is your primary aim. Specifically, any query could replace this line: ``` WHERE repo.repo_id = {repo_id}``` with this line to compare different groups of repositories: ``` WHERE repo_groups.repo_group_id = {repo_id}``` Also replace the set of ids in the **Pull Request Filter** section with a list of repo_group_id numbers to complete this view; a minimal sketch follows below. ------------
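A minimal sketch of that substitution, assuming the same Augur schema used by the queries in this notebook; the repo_group_id values below are placeholders, not values taken from this analysis:
```python
# Placeholder repo_group_id values; substitute the groups you want to compare.
repo_set = {20, 25}

# In each query string, swap the single-repo filter for the group filter:
#   WHERE repo.repo_group_id = repo_groups.repo_group_id
#     AND repo_groups.repo_group_id = {repo_id}
# in place of:
#   WHERE repo.repo_id = {repo_id}
```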
###Code
import psycopg2
import pandas as pd
import sqlalchemy as salc
import matplotlib
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
import datetime
import json
warnings.filterwarnings('ignore')
with open("config.json") as config_file:
config = json.load(config_file)
database_connection_string = 'postgresql+psycopg2://{}:{}@{}:{}/{}'.format(config['user'], config['password'], config['host'], config['port'], config['database'])
dbschema='augur_data'
engine = salc.create_engine(
database_connection_string,
connect_args={'options': '-csearch_path={}'.format(dbschema)})
###Output
_____no_output_____
###Markdown
Control Cell
###Code
#declare all repo ids you would like to produce charts for
repo_set = {25440, 25448}
#can be set as 'competitors' or 'repo'
#'competitors' will group graphs by type, so it is easy to compare across repos
# 'repo' will group graphs by repo so it is easy to look at all the contributor data for each repo
display_grouping = 'repo'
#if display_grouping is set to 'competitors', enter the repo ids you do not want to alias; if display_grouping is set to 'repo', this list will not affect anything
not_aliased_repos = [25440, 25448]
begin_date = '2019-10-01'
end_date = '2020-10-31'
#specify number of outliers for removal in scatter plot
scatter_plot_outliers_removed = 5
save_files = False
###Output
_____no_output_____
###Markdown
Identifying the Longest Running Pull Requests Getting the Data
###Code
pr_all = pd.DataFrame()
for repo_id in repo_set:
pr_query = salc.sql.text(f"""
SELECT
repo.repo_id AS repo_id,
pull_requests.pr_src_id AS pr_src_id,
repo.repo_name AS repo_name,
pr_src_author_association,
repo_groups.rg_name AS repo_group,
pull_requests.pr_src_state,
pull_requests.pr_merged_at,
pull_requests.pr_created_at AS pr_created_at,
pull_requests.pr_closed_at AS pr_closed_at,
date_part( 'year', pr_created_at :: DATE ) AS CREATED_YEAR,
date_part( 'month', pr_created_at :: DATE ) AS CREATED_MONTH,
date_part( 'year', pr_closed_at :: DATE ) AS CLOSED_YEAR,
date_part( 'month', pr_closed_at :: DATE ) AS CLOSED_MONTH,
pr_src_meta_label,
pr_head_or_base,
( EXTRACT ( EPOCH FROM pull_requests.pr_closed_at ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 3600 AS hours_to_close,
( EXTRACT ( EPOCH FROM pull_requests.pr_closed_at ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 86400 AS days_to_close,
( EXTRACT ( EPOCH FROM first_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 3600 AS hours_to_first_response,
( EXTRACT ( EPOCH FROM first_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 86400 AS days_to_first_response,
( EXTRACT ( EPOCH FROM last_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 3600 AS hours_to_last_response,
( EXTRACT ( EPOCH FROM last_response_time ) - EXTRACT ( EPOCH FROM pull_requests.pr_created_at ) ) / 86400 AS days_to_last_response,
first_response_time,
last_response_time,
average_time_between_responses,
assigned_count,
review_requested_count,
labeled_count,
subscribed_count,
mentioned_count,
referenced_count,
closed_count,
head_ref_force_pushed_count,
merged_count,
milestoned_count,
unlabeled_count,
head_ref_deleted_count,
comment_count,
lines_added,
lines_removed,
commit_count,
file_count
FROM
repo,
repo_groups,
pull_requests LEFT OUTER JOIN (
SELECT pull_requests.pull_request_id,
count(*) FILTER (WHERE action = 'assigned') AS assigned_count,
count(*) FILTER (WHERE action = 'review_requested') AS review_requested_count,
count(*) FILTER (WHERE action = 'labeled') AS labeled_count,
count(*) FILTER (WHERE action = 'unlabeled') AS unlabeled_count,
count(*) FILTER (WHERE action = 'subscribed') AS subscribed_count,
count(*) FILTER (WHERE action = 'mentioned') AS mentioned_count,
count(*) FILTER (WHERE action = 'referenced') AS referenced_count,
count(*) FILTER (WHERE action = 'closed') AS closed_count,
count(*) FILTER (WHERE action = 'head_ref_force_pushed') AS head_ref_force_pushed_count,
count(*) FILTER (WHERE action = 'head_ref_deleted') AS head_ref_deleted_count,
count(*) FILTER (WHERE action = 'milestoned') AS milestoned_count,
count(*) FILTER (WHERE action = 'merged') AS merged_count,
MIN(message.msg_timestamp) AS first_response_time,
COUNT(DISTINCT message.msg_timestamp) AS comment_count,
MAX(message.msg_timestamp) AS last_response_time,
(MAX(message.msg_timestamp) - MIN(message.msg_timestamp)) / COUNT(DISTINCT message.msg_timestamp) AS average_time_between_responses
FROM pull_request_events, pull_requests, repo, pull_request_message_ref, message
WHERE repo.repo_id = {repo_id}
AND repo.repo_id = pull_requests.repo_id
AND pull_requests.pull_request_id = pull_request_events.pull_request_id
AND pull_requests.pull_request_id = pull_request_message_ref.pull_request_id
AND pull_request_message_ref.msg_id = message.msg_id
GROUP BY pull_requests.pull_request_id
) response_times
ON pull_requests.pull_request_id = response_times.pull_request_id
LEFT OUTER JOIN (
SELECT pull_request_commits.pull_request_id, count(DISTINCT pr_cmt_sha) AS commit_count FROM pull_request_commits, pull_requests, pull_request_meta
WHERE pull_requests.pull_request_id = pull_request_commits.pull_request_id
AND pull_requests.pull_request_id = pull_request_meta.pull_request_id
AND pull_requests.repo_id = {repo_id}
AND pr_cmt_sha <> pull_requests.pr_merge_commit_sha
AND pr_cmt_sha <> pull_request_meta.pr_sha
GROUP BY pull_request_commits.pull_request_id
) all_commit_counts
ON pull_requests.pull_request_id = all_commit_counts.pull_request_id
LEFT OUTER JOIN (
SELECT MAX(pr_repo_meta_id), pull_request_meta.pull_request_id, pr_head_or_base, pr_src_meta_label
FROM pull_requests, pull_request_meta
WHERE pull_requests.pull_request_id = pull_request_meta.pull_request_id
AND pull_requests.repo_id = {repo_id}
AND pr_head_or_base = 'base'
GROUP BY pull_request_meta.pull_request_id, pr_head_or_base, pr_src_meta_label
) base_labels
ON base_labels.pull_request_id = all_commit_counts.pull_request_id
LEFT OUTER JOIN (
SELECT sum(cmt_added) AS lines_added, sum(cmt_removed) AS lines_removed, pull_request_commits.pull_request_id, count(DISTINCT cmt_filename) AS file_count
FROM pull_request_commits, commits, pull_requests, pull_request_meta
WHERE cmt_commit_hash = pr_cmt_sha
AND pull_requests.pull_request_id = pull_request_commits.pull_request_id
AND pull_requests.pull_request_id = pull_request_meta.pull_request_id
AND pull_requests.repo_id = {repo_id}
AND commits.repo_id = pull_requests.repo_id
AND commits.cmt_commit_hash <> pull_requests.pr_merge_commit_sha
AND commits.cmt_commit_hash <> pull_request_meta.pr_sha
GROUP BY pull_request_commits.pull_request_id
) master_merged_counts
ON base_labels.pull_request_id = master_merged_counts.pull_request_id
WHERE
repo.repo_group_id = repo_groups.repo_group_id
AND repo.repo_id = pull_requests.repo_id
AND repo.repo_id = {repo_id}
ORDER BY
merged_count DESC
""")
pr_a = pd.read_sql(pr_query, con=engine)
if not pr_all.empty:
pr_all = pd.concat([pr_all, pr_a])
else:
# first repo
pr_all = pr_a
display(pr_all.head())
pr_all.dtypes
###Output
_____no_output_____
###Markdown
Begin data pre-processing and adding columns Data type changing
###Code
# cast the count columns to a consistent numeric (float) dtype so later aggregations behave predictably
pr_all[['assigned_count',
'review_requested_count',
'labeled_count',
'subscribed_count',
'mentioned_count',
'referenced_count',
'closed_count',
'head_ref_force_pushed_count',
'merged_count',
'milestoned_count',
'unlabeled_count',
'head_ref_deleted_count',
'comment_count',
'commit_count',
'file_count',
'lines_added',
'lines_removed'
]] = pr_all[['assigned_count',
'review_requested_count',
'labeled_count',
'subscribed_count',
'mentioned_count',
'referenced_count',
'closed_count',
'head_ref_force_pushed_count',
'merged_count',
'milestoned_count',
'unlabeled_count',
'head_ref_deleted_count',
'comment_count',
'commit_count',
'file_count',
'lines_added',
'lines_removed'
]].astype(float)
# Change years to int (and then to string) so they don't display as 2019.0, for example
pr_all[[
'created_year',
'closed_year']] = pr_all[['created_year',
'closed_year']].fillna(-1).astype(int).astype(str)
pr_all.dtypes
print(pr_all['repo_name'].unique())
###Output
_____no_output_____
###Markdown
Add `average_days_between_responses` and `average_hours_between_responses` columns
###Code
# Get days for average_time_between_responses time delta
pr_all['average_days_between_responses'] = pr_all['average_time_between_responses'].map(lambda x: x.days).astype(float)
pr_all['average_hours_between_responses'] = pr_all['average_time_between_responses'].map(lambda x: x.days * 24).astype(float)
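# Note: x.days * 24 converts only the whole-day component of the timedelta,
# so any sub-day remainder is dropped from average_hours_between_responses.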
pr_all.head()
###Output
_____no_output_____
###Markdown
Date filtering entire dataframe
###Code
start_date = pd.to_datetime(begin_date)
# end_date = pd.to_datetime('2020-02-01 09:00:00')
end_date = pd.to_datetime(end_date)
pr_all = pr_all[(pr_all['pr_created_at'] > start_date) & (pr_all['pr_closed_at'] < end_date)]
pr_all['created_year'] = pr_all['created_year'].map(int)
pr_all['created_month'] = pr_all['created_month'].map(int)
pr_all['created_month'] = pr_all['created_month'].map(lambda x: '{0:0>2}'.format(x))
pr_all['created_yearmonth'] = pd.to_datetime(pr_all['created_year'].map(str) + '-' + pr_all['created_month'].map(str) + '-01')
pr_all.head(1)
###Output
_____no_output_____
###Markdown
add `days_to_close` column for pull requests that are still open (closed pull requests already have this column filled from the query)Note: there will be no pull requests that are still open in the dataframe if you filtered by an end date in the above cell
###Code
import datetime
# getting the number of days of (today - created at) for the PRs that are still open
# and putting this in the days_to_close column
# get timedeltas from creation time to today's date/time
days_to_close_open_pr = datetime.datetime.now() - pr_all.loc[pr_all['pr_src_state'] == 'open']['pr_created_at']
# get num days from above timedelta
days_to_close_open_pr = days_to_close_open_pr.apply(lambda x: x.days).astype(int)
# for only OPEN pr's, set the days_to_close column equal to above dataframe
pr_all.loc[pr_all['pr_src_state'] == 'open'] = pr_all.loc[pr_all['pr_src_state'] == 'open'].assign(days_to_close=days_to_close_open_pr)
pr_all.loc[pr_all['pr_src_state'] == 'open'].head()
###Output
_____no_output_____
###Markdown
Add `closed_yearmonth` column for only CLOSED pull requests
###Code
# initiate column by setting all null datetimes
pr_all['closed_yearmonth'] = pd.to_datetime(np.nan)
# Fill column with prettified string of year/month closed that looks like: 2019-07-01
pr_all.loc[pr_all['pr_src_state'] == 'closed'] = pr_all.loc[pr_all['pr_src_state'] == 'closed'].assign(
closed_yearmonth = pd.to_datetime(pr_all.loc[pr_all['pr_src_state'] == 'closed']['closed_year'].astype(int
).map(str) + '-' + pr_all.loc[pr_all['pr_src_state'] == 'closed']['closed_month'].astype(int).map(str) + '-01'))
pr_all.loc[pr_all['pr_src_state'] == 'closed']
###Output
_____no_output_____
###Markdown
Add `merged_flag` column which is just prettified strings based off of if the `pr_merged_at` column is null or not
###Code
""" Merged flag """
if 'pr_merged_at' in pr_all.columns.values:
pr_all['pr_merged_at'] = pr_all['pr_merged_at'].fillna(0)
pr_all['merged_flag'] = 'Not Merged / Rejected'
pr_all['merged_flag'].loc[pr_all['pr_merged_at'] != 0] = 'Merged / Accepted'
pr_all['merged_flag'].loc[pr_all['pr_src_state'] == 'open'] = 'Still Open'
del pr_all['pr_merged_at']
pr_all['merged_flag']
###Output
_____no_output_____
###Markdown
Split into different dataframes All, open, closed, and slowest 20% of these 3 categories (6 dataframes total)
###Code
# Isolate the different state PRs for now
pr_open = pr_all.loc[pr_all['pr_src_state'] == 'open']
pr_closed = pr_all.loc[pr_all['pr_src_state'] == 'closed']
pr_merged = pr_all.loc[pr_all['merged_flag'] == 'Merged / Accepted']
pr_not_merged = pr_all.loc[pr_all['merged_flag'] == 'Not Merged / Rejected']
pr_closed['merged_flag']
###Output
_____no_output_____
###Markdown
Create dataframes that contain the slowest 20% pull requests of each group
###Code
# Filter to the slowest 20% of PRs (days_to_close at or above the 80th percentile)
def filter_20_per_slowest(input_df):
pr_slow20_filtered = pd.DataFrame()
pr_slow20_x = pd.DataFrame()
for value in repo_set:
if not pr_slow20_filtered.empty:
pr_slow20x = input_df.query('repo_id==@value')
pr_slow20x['percentile_rank_local'] = pr_slow20x.days_to_close.rank(pct=True)
pr_slow20x = pr_slow20x.query('percentile_rank_local >= .8', )
pr_slow20_filtered = pd.concat([pr_slow20x, pr_slow20_filtered])
reponame = str(value)
filename = ''.join(['output/pr_slowest20pct', reponame, '.csv'])
pr_slow20x.to_csv(filename)
else:
# first time
pr_slow20_filtered = input_df.copy()
pr_slow20_filtered['percentile_rank_local'] = pr_slow20_filtered.days_to_close.rank(pct=True)
pr_slow20_filtered = pr_slow20_filtered.query('percentile_rank_local >= .8', )
# print(pr_slow20_filtered.describe())
return pr_slow20_filtered
pr_slow20_open = filter_20_per_slowest(pr_open)
pr_slow20_closed = filter_20_per_slowest(pr_closed)
pr_slow20_merged = filter_20_per_slowest(pr_merged)
pr_slow20_not_merged = filter_20_per_slowest(pr_not_merged)
pr_slow20_all = filter_20_per_slowest(pr_all)
pr_slow20_merged#.head()
#create dictionary with a number as the key and a letter as the value
#this is used to alias repos when using the 'competitors' display grouping
letters = []
nums = []
alpha = 'a'
for i in range(0, 26):
letters.append(alpha)
alpha = chr(ord(alpha) + 1)
nums.append(i)
letters = [x.upper() for x in letters]
#create dict out of list of numbers and letters
repo_alias_dict = {nums[i]: letters[i] for i in range(len(nums))}
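# e.g. repo_alias_dict == {0: 'A', 1: 'B', ..., 25: 'Z'}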
# create dict in the form {repo_id : repo_name}
aliased_repos = []
repo_dict = {}
count = 0
for repo_id in repo_set:
#find corresponding repo name from each repo_id
repo_name = pr_all.loc[pr_all['repo_id'] == repo_id].iloc[0]['repo_name']
#if competitors grouping is enabled, alias every repo name except those listed in 'not_aliased_repos'
if display_grouping == 'competitors' and not repo_id in not_aliased_repos:
repo_name = 'Repo ' + repo_alias_dict[count]
#add repo_id to list of aliased repos, this is used for ordering
aliased_repos.append(repo_id)
count += 1
#add each repo_id and repo name as a key-value pair; this dict is used to label the visualization titles
repo_dict.update({repo_id : repo_name})
#guarantees that the not-aliased repos come first when display_grouping is set to 'competitors'
repo_list = not_aliased_repos + aliased_repos
###Output
_____no_output_____
###Markdown
Start Visualization Methods
###Code
from bokeh.palettes import Colorblind, mpl, Category20
from bokeh.layouts import gridplot
from bokeh.models.annotations import Title
from bokeh.io import export_png
from bokeh.io import show, output_notebook
from bokeh.models import ColumnDataSource, Legend, LabelSet, Range1d, LinearAxis, Label
from bokeh.plotting import figure
from bokeh.models.glyphs import Rect
from bokeh.transform import dodge
try:
colors = Colorblind[len(repo_set)]
except:
colors = Colorblind[3]
#mpl['Plasma'][len(repo_set)]
#['A6CEE3','B2DF8A','33A02C','FB9A99']
def remove_outliers(input_df, field, num_outliers_repo_map):
df_no_outliers = input_df.copy()
for repo_name, num_outliers in num_outliers_repo_map.items():
indices_to_drop = input_df.loc[input_df['repo_name'] == repo_name].nlargest(num_outliers, field).index
df_no_outliers = df_no_outliers.drop(index=indices_to_drop)
return df_no_outliers
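# Illustrative usage (the repo name is a placeholder): drop the two largest
# 'days_to_close' values for one repository before plotting:
# pr_trimmed = remove_outliers(pr_closed, 'days_to_close', {'some_repo_name': 2})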
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib.dates import DateFormatter
import datetime as dt
def visualize_mean_days_to_close(input_df, x_axis='closed_yearmonth', description='Closed', save_file=False, num_remove_outliers=0, drop_outliers_repo=None):
# Set the df you want to build the viz's for
driver_df = input_df.copy()
driver_df = driver_df[['repo_id', 'repo_name', 'pr_src_id', 'created_yearmonth', 'closed_yearmonth', 'days_to_close']]
if save_file:
driver_df.to_csv('output/c.westw20small {}.csv'.format(description))
driver_df_mean = driver_df.groupby(['repo_id', x_axis, 'repo_name'],as_index=False).mean()
# Total PRS Closed
fig, ax = plt.subplots()
# the size of A4 paper
fig.set_size_inches(16, 8)
plotter = sns.lineplot(x=x_axis, y='days_to_close', style='repo_name', data=driver_df_mean, sort=True, legend='full', linewidth=2.5, hue='repo_name').set_title("Average Days to Close of {} Pull Requests, July 2017-January 2020".format(description))
if save_file:
fig.savefig('images/slow_20_mean {}.png'.format(description))
# Copying array and deleting the outlier in the copy to re-visualize
def drop_n_largest(input_df, n, repo_name):
input_df_copy = input_df.copy()
indices_to_drop = input_df.loc[input_df['repo_name'] == repo_name].nlargest(n,'days_to_close').index
print("Indices to drop: {}".format(indices_to_drop))
input_df_copy = input_df_copy.drop(index=indices_to_drop)
input_df_copy.loc[input_df['repo_name'] == repo_name]
return input_df_copy
if num_remove_outliers > 0 and drop_outliers_repo:
driver_df_mean_no_outliers = drop_n_largest(driver_df_mean, num_remove_outliers, drop_outliers_repo)
# Total PRS Closed without outlier
fig, ax = plt.subplots()
# the size of A4 paper
fig.set_size_inches(16, 8)
plotter = sns.lineplot(x=x_axis, y='days_to_close', style='repo_name', data=driver_df_mean_no_outliers, sort=False, legend='full', linewidth=2.5, hue='repo_name').set_title("Average Days to Close among {} Pull Requests Without Outlier, July 2017-January 2020".format(description))
plotterlabels = ax.set_xticklabels(driver_df_mean_no_outliers[x_axis], rotation=90, fontsize=8)
if save_file:
fig.savefig('images/slow_20_mean_no_outlier {}.png'.format(description))
#visualize_mean_days_to_close(pr_closed, description='All Closed', save_file=False)
from bokeh.models import ColumnDataSource, FactorRange
from bokeh.transform import factor_cmap
def vertical_grouped_bar(input_df, repo_id, group_by = 'merged_flag', x_axis='closed_year', y_axis='num_commits', description='All', title="{}Average Commit Counts Per Year for {} Pull Requests"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook() # let bokeh display plot in jupyter cell output
driver_df = input_df.copy() # deep copy input data so we do not change the external dataframe
# Filter df by passed *repo_id* param
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
# Change closed year to int so that doesn't display as 2019.0 for example
driver_df['closed_year'] = driver_df['closed_year'].astype(int).astype(str)
# contains the closed years
x_groups = sorted(list(driver_df[x_axis].unique()))
# inner groups on x_axis they are merged and not_merged
groups = list(driver_df[group_by].unique())
# setup color pallete
try:
colors = mpl['Plasma'][len(groups)]
except:
colors = [mpl['Plasma'][3][0]] + [mpl['Plasma'][3][1]]
merged_avg_values = list(driver_df.loc[driver_df[group_by] == 'Merged / Accepted'].groupby([x_axis],as_index=False).mean().round(1)['commit_count'])
not_merged_avg_values = list(driver_df.loc[driver_df[group_by] == 'Not Merged / Rejected'].groupby([x_axis],as_index=False).mean().round(1)['commit_count'])
# Setup data in format for grouped bar chart
data = {
'years' : x_groups,
'Merged / Accepted' : merged_avg_values,
'Not Merged / Rejected' : not_merged_avg_values,
}
x = [ (year, pr_state) for year in x_groups for pr_state in groups ]
counts = sum(zip(data['Merged / Accepted'], data['Not Merged / Rejected']), ())
source = ColumnDataSource(data=dict(x=x, counts=counts))
title_beginning = '{}: '.format(repo_dict[repo_id])
title=title.format(title_beginning, description)
plot_width = len(x_groups) * 300
title_text_font_size = 16
if (len(title) * title_text_font_size / 2) > plot_width:
plot_width = int(len(title) * title_text_font_size / 2) + 40
p = figure(x_range=FactorRange(*x), plot_height=450, plot_width=plot_width, title=title, y_range=(0, max(merged_avg_values + not_merged_avg_values)*1.15), toolbar_location=None)
# Vertical bar glyph
p.vbar(x='x', top='counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=colors, factors=groups, start=1, end=2))
# Data label
labels = LabelSet(x='x', y='counts', text='counts',# y_offset=-8, x_offset=34,
text_font_size="12pt", text_color="black",
source=source, text_align='center')
p.add_layout(labels)
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
p.yaxis.axis_label = 'Average Commits / Pull Request'
p.xaxis.axis_label = 'Year Closed'
p.title.align = "center"
p.title.text_font_size = "{}px".format(title_text_font_size)
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "15px"
p.yaxis.axis_label_text_font_size = "15px"
p.yaxis.major_label_text_font_size = "15px"
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the average commits per pull requests over an entire year, for merged and not merged pull requests."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
#show(p)
if save_files:
export_png(grid, filename="./images/v_grouped_bar/v_grouped_bar__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
#vertical_grouped_bar(pr_all, repo_id=repo_list)
def vertical_grouped_bar_line_counts(input_df, repo_id, x_axis='closed_year', y_max1=600000, y_max2=1000, description="", title ="", save_file=False):
output_notebook() # let bokeh display plot in jupyter cell output
driver_df = input_df.copy() # deep copy input data so we do not change the external dataframe
# Filter df by passed *repo_id* param
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
# Change closed year to int so that doesn't display as 2019.0 for example
driver_df['closed_year'] = driver_df['closed_year'].astype(int).astype(str)
# contains the closed years
x_groups = sorted(list(driver_df[x_axis].unique()))
groups = ['Lines Added', 'Lines Removed', 'Files Changed']
# setup color pallete
colors = mpl['Plasma'][3]
display(pr_all[pr_all['lines_added'].notna()])#.groupby([x_axis],as_index=False).mean())
files_avg_values = list(driver_df.groupby([x_axis],as_index=False).mean().round(1)['file_count'])
added_avg_values = list(driver_df.groupby([x_axis],as_index=False).mean().round(1)['lines_added'])
removed_avg_values = list(driver_df.groupby([x_axis],as_index=False).mean().round(1)['lines_removed'])
display(driver_df.groupby([x_axis],as_index=False).mean())
print(files_avg_values)
print(added_avg_values)
print(removed_avg_values)
# Setup data in format for grouped bar chart
data = {
'years' : x_groups,
'Lines Added' : added_avg_values,
'Lines Removed' : removed_avg_values,
'Files Changed' : files_avg_values
}
x = [ (year, pr_state) for year in x_groups for pr_state in groups ]
line_counts = sum(zip(data['Lines Added'], data['Lines Removed'], [0]*len(x_groups)), ())
file_counts = sum(zip([0]*len(x_groups),[0]*len(x_groups),data['Files Changed']), ())
print(line_counts)
print(file_counts)
source = ColumnDataSource(data=dict(x=x, line_counts=line_counts, file_counts=file_counts))
if y_max1:
p = figure(x_range=FactorRange(*x), plot_height=450, plot_width=700, title=title.format(description), y_range=(0,y_max1))
else:
p = figure(x_range=FactorRange(*x), plot_height=450, plot_width=700, title=title.format(description))
# Setting the second y axis range name and range
p.extra_y_ranges = {"file_counts": Range1d(start=0, end=y_max2)}
# Adding the second axis to the plot.
p.add_layout(LinearAxis(y_range_name="file_counts"), 'right')
# Data label for line counts
labels = LabelSet(x='x', y='line_counts', text='line_counts',y_offset=8,# x_offset=34,
text_font_size="10pt", text_color="black",
source=source, text_align='center')
p.add_layout(labels)
# Vertical bar glyph for line counts
p.vbar(x='x', top='line_counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=colors, factors=groups, start=1, end=2))
# Data label for file counts
labels = LabelSet(x='x', y='file_counts', text='file_counts', y_offset=0, #x_offset=34,
text_font_size="10pt", text_color="black",
source=source, text_align='center', y_range_name="file_counts")
p.add_layout(labels)
# Vertical bar glyph for file counts
p.vbar(x='x', top='file_counts', width=0.9, source=source, line_color="white",
fill_color=factor_cmap('x', palette=colors, factors=groups, start=1, end=2), y_range_name="file_counts")
p.left[0].formatter.use_scientific = False
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
p.yaxis.axis_label = 'Average Commits / Pull Request'
p.xaxis.axis_label = 'Year Closed'
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
show(p)
if save_files:
export_png(p, filename="./images/v_grouped_bar/v_grouped_bar__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
""" THIS VIZ IS NOT READY YET , BUT UNCOMMENT LINE BELOW IF YOU WANT TO SEE"""
# vertical_grouped_bar_line_counts(pr_all, description='All', title="Average Size Metrics Per Year for {} Merged Pull Requests in Master", save_file=False, y_max1=580000, y_max2=1100)
None
def horizontal_stacked_bar(input_df, repo_id, group_by='merged_flag', x_axis='comment_count', description="All Closed", y_axis='closed_year', title="Mean Comments for {} Pull Requests"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
driver_df = input_df.copy()
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
output_notebook()
try:
y_groups = sorted(list(driver_df[y_axis].unique()))
except:
y_groups = [repo_id]
groups = driver_df[group_by].unique()
try:
colors = mpl['Plasma'][len(groups)]
except:
colors = [mpl['Plasma'][3][0]] + [mpl['Plasma'][3][1]]
len_not_merged = len(driver_df.loc[driver_df['merged_flag'] == 'Not Merged / Rejected'])
len_merged = len(driver_df.loc[driver_df['merged_flag'] == 'Merged / Accepted'])
title_beginning = '{}: '.format(repo_dict[repo_id])
plot_width = 650
p = figure(y_range=y_groups, plot_height=450, plot_width=plot_width, # y_range=y_groups,#(pr_all[y_axis].min(),pr_all[y_axis].max()) #y_axis_type="datetime",
title='{} {}'.format(title_beginning, title.format(description)), toolbar_location=None)
possible_maximums= []
for y_value in y_groups:
y_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Merged / Accepted')]
y_not_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Not Merged / Rejected')]
if len(y_merged_data) > 0:
y_merged_data[x_axis + '_mean'] = y_merged_data[x_axis].mean().round(1)
else:
y_merged_data[x_axis + '_mean'] = 0.00
if len(y_not_merged_data) > 0:
y_not_merged_data[x_axis + '_mean'] = y_not_merged_data[x_axis].mean().round(1)
else:
y_not_merged_data[x_axis + '_mean'] = 0
not_merged_source = ColumnDataSource(y_not_merged_data)
merged_source = ColumnDataSource(y_merged_data)
possible_maximums.append(max(y_not_merged_data[x_axis + '_mean']))
possible_maximums.append(max(y_merged_data[x_axis + '_mean']))
# mean comment count for merged
merged_comment_count_glyph = p.hbar(y=dodge(y_axis, -0.1, range=p.y_range), left=0, right=x_axis + '_mean', height=0.04*len(driver_df[y_axis].unique()),
source=merged_source, fill_color="black")#,legend_label="Mean Days to Close",
# Data label
labels = LabelSet(x=x_axis + '_mean', y=dodge(y_axis, -0.1, range=p.y_range), text=x_axis + '_mean', y_offset=-8, x_offset=34,
text_font_size="12pt", text_color="black",
source=merged_source, text_align='center')
p.add_layout(labels)
# mean comment count For nonmerged
not_merged_comment_count_glyph = p.hbar(y=dodge(y_axis, 0.1, range=p.y_range), left=0, right=x_axis + '_mean',
height=0.04*len(driver_df[y_axis].unique()), source=not_merged_source, fill_color="#e84d60")#legend_label="Mean Days to Close",
# Data label
labels = LabelSet(x=x_axis + '_mean', y=dodge(y_axis, 0.1, range=p.y_range), text=x_axis + '_mean', y_offset=-8, x_offset=34,
text_font_size="12pt", text_color="#e84d60",
source=not_merged_source, text_align='center')
p.add_layout(labels)
# p.y_range.range_padding = 0.1
p.ygrid.grid_line_color = None
p.legend.location = "bottom_right"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.xaxis.axis_label = 'Average Comments / Pull Request'
p.yaxis.axis_label = 'Repository' if y_axis == 'repo_name' else 'Year Closed' if y_axis == 'closed_year' else ''
legend = Legend(
items=[
("Merged Pull Request Mean Comment Count", [merged_comment_count_glyph]),
("Rejected Pull Request Mean Comment Count", [not_merged_comment_count_glyph])
],
location='center',
orientation='vertical',
border_line_color="black"
)
p.add_layout(legend, "below")
p.title.text_font_size = "16px"
p.title.align = "center"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.x_range = Range1d(0, max(possible_maximums)*1.15)
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the average number of comments per merged or not merged pull request."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
#show(p, plot_width=1200, plot_height=300*len(y_groups) + 300)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/h_stacked_bar_mean_comments_merged_status/mean_comments_merged_status__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
#horizontal_stacked_bar(pr_closed, repo_id=repo_list)
def merged_ratio_vertical_grouped_bar(data_dict, repo_id, x_axis='closed_year', description="All Closed", title="Count of {} Pull Requests by Merged Status"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook()
colors = mpl['Plasma'][6]
#if repo_name == 'mbed-os':
#colors = colors[::-1]
for data_desc, input_df in data_dict.items():
x_groups = sorted(list(input_df[x_axis].astype(str).unique()))
break
plot_width = 315 * len(x_groups)
title_beginning = repo_dict[repo_id]
p = figure(x_range=x_groups, plot_height=350, plot_width=plot_width,
title='{}: {}'.format(title_beginning, title.format(description)), toolbar_location=None)
dodge_amount = 0.12
color_index = 0
x_offset = 50
all_totals = []
for data_desc, input_df in data_dict.items():
driver_df = input_df.copy()
driver_df[x_axis] = driver_df[x_axis].astype(str)
groups = sorted(list(driver_df['merged_flag'].unique()))
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
len_merged = []
zeros = []
len_not_merged = []
totals = []
for x_group in x_groups:
len_merged_entry = len(driver_df.loc[(driver_df['merged_flag'] == 'Merged / Accepted') & (driver_df[x_axis] == x_group)])
totals += [len(driver_df.loc[(driver_df['merged_flag'] == 'Not Merged / Rejected') & (driver_df[x_axis] == x_group)]) + len_merged_entry]
len_not_merged += [len(driver_df.loc[(driver_df['merged_flag'] == 'Not Merged / Rejected') & (driver_df[x_axis] == x_group)])]
len_merged += [len_merged_entry]
zeros.append(0)
data = {'X': x_groups}
for group in groups:
data[group] = []
for x_group in x_groups:
data[group] += [len(driver_df.loc[(driver_df['merged_flag'] == group) & (driver_df[x_axis] == x_group)])]
data['len_merged'] = len_merged
data['len_not_merged'] = len_not_merged
data['totals'] = totals
data['zeros'] = zeros
if data_desc == "All":
all_totals = totals
source = ColumnDataSource(data)
stacked_bar = p.vbar_stack(groups, x=dodge('X', dodge_amount, range=p.x_range), width=0.2, source=source, color=colors[1:3], legend_label=[f"{data_desc} " + "%s" % x for x in groups])
# Data label for merged
p.add_layout(
LabelSet(x=dodge('X', dodge_amount, range=p.x_range), y='zeros', text='len_merged', y_offset=2, x_offset=x_offset,
text_font_size="12pt", text_color=colors[1:3][0],
source=source, text_align='center')
)
if min(data['totals']) < 400:
y_offset = 15
else:
y_offset = 0
# Data label for not merged
p.add_layout(
LabelSet(x=dodge('X', dodge_amount, range=p.x_range), y='totals', text='len_not_merged', y_offset=y_offset, x_offset=x_offset,
text_font_size="12pt", text_color=colors[1:3][1],
source=source, text_align='center')
)
# Data label for total
p.add_layout(
LabelSet(x=dodge('X', dodge_amount, range=p.x_range), y='totals', text='totals', y_offset=0, x_offset=0,
text_font_size="12pt", text_color='black',
source=source, text_align='center')
)
dodge_amount *= -1
colors = colors[::-1]
x_offset *= -1
p.y_range = Range1d(0, max(all_totals)*1.4)
p.xgrid.grid_line_color = None
p.legend.location = "top_center"
p.legend.orientation="horizontal"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.yaxis.axis_label = 'Count of Pull Requests'
p.xaxis.axis_label = 'Repository' if x_axis == 'repo_name' else 'Year Closed' if x_axis == 'closed_year' else ''
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.outline_line_color = None
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the number of closed pull requests per year in four different categories. These four categories are All Merged, All Not Merged, Slowest 20% Merged, and Slowest 20% Not Merged."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/v_stacked_bar_merged_status_count/stacked_bar_merged_status_count__{}_PRs__xaxis_{}__repo_{}.png".format(description, x_axis, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#erged_ratio_vertical_grouped_bar({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
def visualize_mean_response_times(input_df, repo_id, time_unit='days', x_max=95, y_axis='closed_year', description="All Closed", legend_position=(410, 10)):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook() # let bokeh show plot in jupyter cell output
driver_df = input_df.copy()[['repo_name', 'repo_id', 'merged_flag', y_axis, time_unit + '_to_first_response', time_unit + '_to_last_response',
time_unit + '_to_close']] # deep copy input data so we do not alter the external dataframe
# filter by repo_id param
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
title_beginning = '{}: '.format(repo_dict[repo_id])
plot_width = 950
p = figure(toolbar_location=None, y_range=sorted(driver_df[y_axis].unique()), plot_width=plot_width,
plot_height=450,#75*len(driver_df[y_axis].unique()),
title="{}Mean Response Times for Pull Requests {}".format(title_beginning, description))
first_response_glyphs = []
last_response_glyphs = []
merged_days_to_close_glyphs = []
not_merged_days_to_close_glyphs = []
possible_maximums = []
for y_value in driver_df[y_axis].unique():
y_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Merged / Accepted')]
y_not_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Not Merged / Rejected')]
y_merged_data[time_unit + '_to_first_response_mean'] = y_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_last_response_mean'] = y_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_close_mean'] = y_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_first_response_mean'] = y_not_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_last_response_mean'] = y_not_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_close_mean'] = y_not_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
possible_maximums.append(max(y_merged_data[time_unit + '_to_close_mean']))
possible_maximums.append(max(y_not_merged_data[time_unit + '_to_close_mean']))
maximum = max(possible_maximums)*1.15
ideal_difference = maximum*0.064
for y_value in driver_df[y_axis].unique():
y_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Merged / Accepted')]
y_not_merged_data = driver_df.loc[(driver_df[y_axis] == y_value) & (driver_df['merged_flag'] == 'Not Merged / Rejected')]
y_merged_data[time_unit + '_to_first_response_mean'] = y_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_last_response_mean'] = y_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_merged_data[time_unit + '_to_close_mean'] = y_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_first_response_mean'] = y_not_merged_data[time_unit + '_to_first_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_last_response_mean'] = y_not_merged_data[time_unit + '_to_last_response'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
y_not_merged_data[time_unit + '_to_close_mean'] = y_not_merged_data[time_unit + '_to_close'].mean().round(1) if len(y_not_merged_data) > 0 else 0.00
not_merged_source = ColumnDataSource(y_not_merged_data)
merged_source = ColumnDataSource(y_merged_data)
# mean PR length for merged
merged_days_to_close_glyph = p.hbar(y=dodge(y_axis, -0.1, range=p.y_range), left=0, right=time_unit + '_to_close_mean', height=0.04*len(driver_df[y_axis].unique()),
source=merged_source, fill_color="black")#,legend_label="Mean Days to Close",
merged_days_to_close_glyphs.append(merged_days_to_close_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_close_mean', y=dodge(y_axis, -0.1, range=p.y_range), text=time_unit + '_to_close_mean', y_offset=-8, x_offset=34, #34
text_font_size="12pt", text_color="black",
source=merged_source, text_align='center')
p.add_layout(labels)
# mean PR length For nonmerged
not_merged_days_to_close_glyph = p.hbar(y=dodge(y_axis, 0.1, range=p.y_range), left=0, right=time_unit + '_to_close_mean',
height=0.04*len(driver_df[y_axis].unique()), source=not_merged_source, fill_color="#e84d60")#legend_label="Mean Days to Close",
not_merged_days_to_close_glyphs.append(not_merged_days_to_close_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_close_mean', y=dodge(y_axis, 0.1, range=p.y_range), text=time_unit + '_to_close_mean', y_offset=-8, x_offset=44,
text_font_size="12pt", text_color="#e84d60",
source=not_merged_source, text_align='center')
p.add_layout(labels)
#if the difference between two values is less than 6.4 percent move the second one to the right 30 pixels
if (max(y_merged_data[time_unit + '_to_last_response_mean']) - max(y_merged_data[time_unit + '_to_first_response_mean'])) < ideal_difference:
merged_x_offset = 30
else:
merged_x_offset = 0
#if the difference between two values is less than 6.4 percent move the second one to the right 30 pixels
if (max(y_not_merged_data[time_unit + '_to_last_response_mean']) - max(y_not_merged_data[time_unit + '_to_first_response_mean'])) < ideal_difference:
not_merged_x_offset = 30
else:
not_merged_x_offset = 0
#if there is only one bar set the y_offsets so the labels will not overlap the bars
if len(driver_df[y_axis].unique()) == 1:
merged_y_offset = -65
not_merged_y_offset = 45
else:
merged_y_offset = -45
not_merged_y_offset = 25
# mean time to first response
glyph = Rect(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, -0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[0])
first_response_glyph = p.add_glyph(merged_source, glyph)
first_response_glyphs.append(first_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, 0, range=p.y_range),text=time_unit + '_to_first_response_mean',x_offset = 0, y_offset=merged_y_offset,#-60,
text_font_size="12pt", text_color=colors[0],
source=merged_source, text_align='center')
p.add_layout(labels)
#for nonmerged
glyph = Rect(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, 0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[0])
first_response_glyph = p.add_glyph(not_merged_source, glyph)
first_response_glyphs.append(first_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_first_response_mean', y=dodge(y_axis, 0, range=p.y_range),text=time_unit + '_to_first_response_mean',x_offset = 0, y_offset=not_merged_y_offset,#40,
text_font_size="12pt", text_color=colors[0],
source=not_merged_source, text_align='center')
p.add_layout(labels)
# mean time to last response
glyph = Rect(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, -0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[1])
last_response_glyph = p.add_glyph(merged_source, glyph)
last_response_glyphs.append(last_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, 0, range=p.y_range), text=time_unit + '_to_last_response_mean', x_offset=merged_x_offset, y_offset=merged_y_offset,#-60,
text_font_size="12pt", text_color=colors[1],
source=merged_source, text_align='center')
p.add_layout(labels)
#for nonmerged
glyph = Rect(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, 0.1, range=p.y_range), width=x_max/100, height=0.08*len(driver_df[y_axis].unique()), fill_color=colors[1])
last_response_glyph = p.add_glyph(not_merged_source, glyph)
last_response_glyphs.append(last_response_glyph)
# Data label
labels = LabelSet(x=time_unit + '_to_last_response_mean', y=dodge(y_axis, 0, range=p.y_range), text=time_unit + '_to_last_response_mean', x_offset = not_merged_x_offset, y_offset=not_merged_y_offset,#40,
text_font_size="12pt", text_color=colors[1],
source=not_merged_source, text_align='center')
p.add_layout(labels)
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label = "Days to Close"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
#adjust the starting point and ending point based on the maximum of maximum of the graph
p.x_range = Range1d(maximum/30 * -1, maximum*1.15)
p.yaxis.axis_label = "Repository" if y_axis == 'repo_name' else 'Year Closed' if y_axis == 'closed_year' else ''
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.ygrid.grid_line_color = None
p.y_range.range_padding = 0.15
p.outline_line_color = None
p.toolbar.logo = None
p.toolbar_location = None
def add_legend(location, orientation, side):
legend = Legend(
items=[
("Mean Days to First Response", first_response_glyphs),
("Mean Days to Last Response", last_response_glyphs),
("Merged Mean Days to Close", merged_days_to_close_glyphs),
("Not Merged Mean Days to Close", not_merged_days_to_close_glyphs)
],
location=location,
orientation=orientation,
border_line_color="black"
# title='Example Title'
)
p.add_layout(legend, side)
# add_legend((150, 50), "horizontal", "center")
add_legend(legend_position, "vertical", "right")
plot = p
p = figure(width = plot_width, height = 200, margin = (0, 0, 0, 0))
caption = "Caption Here"
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
export_png(grid, filename="./images/hbar_response_times/mean_response_times__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
# for repo_name in pr_closed['repo_name'].unique():
#visualize_mean_response_times(pr_closed, repo_id=repo_list, legend_position='center')
def visualize_mean_time_between_responses(data_dict, repo_id, time_unit='Days', x_axis='closed_yearmonth', description="All Closed", line_group='merged_flag', y_axis='average_days_between_responses', num_outliers_repo_map={}):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook()
plot_width = 950
p1 = figure(x_axis_type="datetime", title="{}: Mean {} Between Comments by Month Closed for {} Pull Requests".format(repo_dict[repo_id], time_unit, description), plot_width=plot_width, x_range=(pr_all[x_axis].min(),pr_all[x_axis].max()), plot_height=500, toolbar_location=None)
colors = Category20[10][6:]
color_index = 0
glyphs = []
possible_maximums = []
for data_desc, input_df in data_dict.items():
driver_df = input_df.copy()
driver_df = remove_outliers(driver_df, y_axis, num_outliers_repo_map)
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
index = 0
driver_df_mean = driver_df.groupby(['repo_id', line_group, x_axis],as_index=False).mean()
title_ending = ''
if repo_id:
title_ending += ' for Repo: {}'.format(repo_id)
for group_num, line_group_value in enumerate(driver_df[line_group].unique(), color_index):
glyphs.append(p1.line(driver_df_mean.loc[driver_df_mean[line_group] == line_group_value][x_axis], driver_df_mean.loc[driver_df_mean[line_group] == line_group_value][y_axis], color=colors[group_num], line_width = 3))
color_index += 1
possible_maximums.append(max(driver_df_mean.loc[driver_df_mean[line_group] == line_group_value][y_axis].dropna()))
for repo, num_outliers in num_outliers_repo_map.items():
if repo_dict[repo_id] == repo:
p1.add_layout(Title(text="** {} outliers for {} were removed".format(num_outliers, repo), align="center"), "below")
p1.grid.grid_line_alpha = 0.3
p1.xaxis.axis_label = 'Month Closed'
p1.xaxis.ticker.desired_num_ticks = 15
p1.yaxis.axis_label = 'Mean {} Between Responses'.format(time_unit)
p1.legend.location = "top_left"
legend = Legend(
items=[
("All Not Merged / Rejected", [glyphs[0]]),
("All Merged / Accepted", [glyphs[1]]),
("Slowest 20% Not Merged / Rejected", [glyphs[2]]),
("Slowest 20% Merged / Accepted", [glyphs[3]])
],
location='center_right',
orientation='vertical',
border_line_color="black"
)
p1.add_layout(legend, 'right')
p1.title.text_font_size = "16px"
p1.xaxis.axis_label_text_font_size = "16px"
p1.xaxis.major_label_text_font_size = "16px"
p1.yaxis.axis_label_text_font_size = "16px"
p1.yaxis.major_label_text_font_size = "16px"
p1.xaxis.major_label_orientation = 45.0
p1.y_range = Range1d(0, max(possible_maximums)*1.15)
plot = p1
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the average number of days between comments for all closed pull requests per month in four categories. These four categories are All Merged, All Not Merged, Slowest 20% Merged, and Slowest 20% Not Merged."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/line_mean_time_between_comments/line_mean_time_between_comments__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#visualize_mean_time_between_responses({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
def visualize_time_to_first_comment(input_df, repo_id, x_axis='pr_closed_at', y_axis='days_to_first_response', description='All', num_outliers_repo_map={}, group_by='merged_flag', same_scales=True, columns=2, legend_position='top_right', remove_outliers = 0):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
output_notebook()
driver_df = input_df.copy()
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
group_by_groups = sorted(driver_df[group_by].unique())
seconds = ((driver_df[x_axis].max() + datetime.timedelta(days=25))- (driver_df[x_axis].min() - datetime.timedelta(days=30))).total_seconds()
quarter_years = seconds / 10506240
quarter_years = round(quarter_years)
title_beginning = '{}: '.format(repo_dict[repo_id])
plot_width = 180 * 5
p = figure(x_range=(driver_df[x_axis].min() - datetime.timedelta(days=30), driver_df[x_axis].max() + datetime.timedelta(days=25)),
#(driver_df[y_axis].min(), driver_df[y_axis].max()),
toolbar_location=None,
title='{}Days to First Response for {} Closed Pull Requests'.format(title_beginning, description), plot_width=plot_width,
plot_height=400, x_axis_type='datetime')
for index, group_by_group in enumerate(group_by_groups):
p.scatter(x_axis, y_axis, color=colors[index], marker="square", source=driver_df.loc[driver_df[group_by] == group_by_group], legend_label=group_by_group)
if group_by_group == "Merged / Accepted":
merged_values = driver_df.loc[driver_df[group_by] == group_by_group][y_axis].dropna().values.tolist()
else:
not_merged_values = driver_df.loc[driver_df[group_by] == group_by_group][y_axis].dropna().values.tolist()
values = not_merged_values + merged_values
#values.fillna(0)
for value in range(0, remove_outliers):
values.remove(max(values))
#determine y_max by finding the max of the values and scaling it up a small amount
y_max = max(values)*1.0111
outliers = driver_df.loc[driver_df[y_axis] > y_max]
if len(outliers) > 0:
if repo_id:
p.add_layout(Title(text="** Outliers cut off at {} days: {} outlier(s) for {} were removed **".format(y_max, len(outliers), repo_name), align="center"), "below")
else:
p.add_layout(Title(text="** Outliers cut off at {} days: {} outlier(s) were removed **".format(y_max, len(outliers)), align="center"), "below")
p.xaxis.axis_label = 'Date Closed' if x_axis == 'pr_closed_at' else 'Date Created' if x_axis == 'pr_created_at' else 'Date'
p.yaxis.axis_label = 'Days to First Response'
p.legend.location = legend_position
p.title.align = "center"
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
p.y_range = Range1d(0, y_max)
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "This graph shows the days to first reponse for individual pull requests, either Merged or Not Merged."
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = repo_ids
export_png(grid, filename="./images/first_comment_times/scatter_first_comment_times__{}_PRs__xaxis_{}__repo_{}.png".format(description, x_axis, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#visualize_time_to_first_comment(pr_closed, repo_id= repo_list, legend_position='top_right', remove_outliers = scatter_plot_outliers_removed)
def hex_to_RGB(hex):
''' "#FFFFFF" -> [255,255,255] '''
# Pass 16 to the integer function for change of base
return [int(hex[i:i+2], 16) for i in range(1,6,2)]
def color_dict(gradient):
''' Takes in a list of RGB sub-lists and returns dictionary of
colors in RGB and hex form for use in a graphing function
defined later on '''
return {"hex":[RGB_to_hex(RGB) for RGB in gradient],
"r":[RGB[0] for RGB in gradient],
"g":[RGB[1] for RGB in gradient],
"b":[RGB[2] for RGB in gradient]}
def RGB_to_hex(RGB):
''' [255,255,255] -> "#FFFFFF" '''
# Components need to be integers for hex to make sense
RGB = [int(x) for x in RGB]
return "#"+"".join(["0{0:x}".format(v) if v < 16 else
"{0:x}".format(v) for v in RGB])
def linear_gradient(start_hex, finish_hex="#FFFFFF", n=10):
''' returns a gradient list of (n) colors between
two hex colors. start_hex and finish_hex
should be the full six-digit color string,
including the number sign ("#FFFFFF") '''
# Starting and ending colors in RGB form
s = hex_to_RGB(start_hex)
f = hex_to_RGB(finish_hex)
# Initialize a list of the output colors with the starting color
RGB_list = [s]
# Calculate a color at each evenly spaced value of t from 1 to n
for t in range(1, n):
# Interpolate RGB vector for color at the current value of t
curr_vector = [
int(s[j] + (float(t)/(n-1))*(f[j]-s[j]))
for j in range(3)
]
# Add it to our list of output colors
RGB_list.append(curr_vector)
return color_dict(RGB_list)
from bokeh.models import BasicTicker, ColorBar, LinearColorMapper, PrintfTickFormatter, LogTicker, Label
from bokeh.transform import transform
def events_types_heat_map(input_df, repo_id, include_comments=True, x_axis='closed_year', facet="merged_flag",columns=2, x_max=1100, same_scales=True, y_axis='repo_name', description="All Closed", title="Average Pull Request Event Types for {} Pull Requests"):
if type(repo_id) == type(repo_list):
repo_ids = repo_id
else:
repo_ids = [repo_id]
for repo_id in repo_ids:
colors = linear_gradient('#f5f5dc', '#fff44f', 150)['hex']
driver_df = input_df.copy()
driver_df[x_axis] = driver_df[x_axis].astype(str)
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
if facet == 'closed_year' or y_axis == 'closed_year':
driver_df['closed_year'] = driver_df['closed_year'].astype(int).astype(str)
optional_comments = ['comment_count'] if include_comments else []
driver_df = driver_df[['repo_id', 'repo_name',x_axis, 'assigned_count',
'review_requested_count',
'labeled_count',
'subscribed_count',
'mentioned_count',
'referenced_count',
'closed_count',
'head_ref_force_pushed_count',
'merged_count',
'milestoned_count',
'unlabeled_count',
'head_ref_deleted_count', facet ] + optional_comments]
y_groups = [
'review_requested_count',
'labeled_count',
'subscribed_count',
'referenced_count',
'closed_count',
# 'milestoned_count',
] + optional_comments
output_notebook()
optional_group_comments = ['comment'] if include_comments else []
# y_groups = ['subscribed', 'mentioned', 'labeled', 'review_requested', 'head_ref_force_pushed', 'referenced', 'closed', 'merged', 'unlabeled', 'head_ref_deleted', 'milestoned', 'assigned'] + optional_group_comments
x_groups = sorted(list(driver_df[x_axis].unique()))
grid_array = []
grid_row = []
for index, facet_group in enumerate(sorted(driver_df[facet].unique())):
facet_data = driver_df.loc[driver_df[facet] == facet_group]
# display(facet_data.sort_values('merged_count', ascending=False).head(50))
driver_df_mean = facet_data.groupby(['repo_id', 'repo_name', x_axis], as_index=False).mean().round(1)
# data = {'Y' : y_groups}
# for group in y_groups:
# data[group] = driver_df_mean[group].tolist()
plot_width = 700
p = figure(y_range=y_groups, plot_height=500, plot_width=plot_width, x_range=x_groups,
title='{}'.format(format(facet_group)))
for y_group in y_groups:
driver_df_mean['field'] = y_group
source = ColumnDataSource(driver_df_mean)
mapper = LinearColorMapper(palette=colors, low=driver_df_mean[y_group].min(), high=driver_df_mean[y_group].max())
p.rect(y='field', x=x_axis, width=1, height=1, source=source,
line_color=None, fill_color=transform(y_group, mapper))
# Data label
labels = LabelSet(x=x_axis, y='field', text=y_group, y_offset=-8,
text_font_size="12pt", text_color='black',
source=source, text_align='center')
p.add_layout(labels)
color_bar = ColorBar(color_mapper=mapper, location=(0, 0),
ticker=BasicTicker(desired_num_ticks=9),
formatter=PrintfTickFormatter(format="%d"))
# p.add_layout(color_bar, 'right')
p.y_range.range_padding = 0.1
p.ygrid.grid_line_color = None
p.legend.location = "bottom_right"
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.xaxis.axis_label = 'Year Closed'
p.yaxis.axis_label = 'Event Type'
p.title.align = "center"
p.title.text_font_size = "15px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "16px"
p.yaxis.axis_label_text_font_size = "16px"
p.yaxis.major_label_text_font_size = "16px"
grid_row.append(p)
if index % columns == columns - 1:
grid_array.append(grid_row)
grid_row = []
grid = gridplot(grid_array)
#add title, the title changes its x value based on the number of x_groups so that it stays centered
label=Label(x=-len(x_groups), y=6.9, text='{}: Average Pull Request Event Types for {} Closed Pull Requests'.format(repo_dict[repo_id], description), render_mode='css', text_font_size = '17px', text_font_style= 'bold')
p.add_layout(label)
show(grid, plot_width=1200, plot_height=1200)
if save_files:
comments_included = 'comments_included' if include_comments else 'comments_not_included'
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/h_stacked_bar_mean_event_types/mean_event_types__facet_{}__{}_PRs__yaxis_{}__{}__repo_{}.png".format(facet, description, y_axis, comments_included, repo_dict[repo_id]))
# for repo_name in pr_all['repo_name'].unique():
#events_types_heat_map(pr_closed, repo_id=repo_list)
red_green_gradient = linear_gradient('#0080FF', '#DC143C', 150)['hex']
#32CD32
def heat_map(input_df, repo_id, x_axis='repo_name', group_by='merged_flag', y_axis='closed_yearmonth', same_scales=True, description="All Closed", heat_field='pr_duration_days', columns=2, remove_outliers = 0):
output_notebook()
driver_df = input_df.copy()[['repo_id', y_axis, group_by, x_axis, heat_field]]
print(driver_df)
if display_grouping == 'repo':
driver_df = driver_df.loc[driver_df['repo_id'] == repo_id]
driver_df[y_axis] = driver_df[y_axis].astype(str)
# add new group by + xaxis column
driver_df['grouped_x'] = driver_df[x_axis] + ' - ' + driver_df[group_by]
driver_df_mean = driver_df.groupby(['grouped_x', y_axis], as_index=False).mean()
colors = red_green_gradient
y_groups = driver_df_mean[y_axis].unique()
x_groups = sorted(driver_df[x_axis].unique())
grouped_x_groups = sorted(driver_df_mean['grouped_x'].unique())
values = driver_df_mean['pr_duration_days'].values.tolist()
for i in range(0, remove_outliers):
values.remove(max(values))
heat_max = max(values)* 1.02
mapper = LinearColorMapper(palette=colors, low=driver_df_mean[heat_field].min(), high=heat_max)#driver_df_mean[heat_field].max())
source = ColumnDataSource(driver_df_mean)
title_beginning = repo_dict[repo_id] + ':' if not type(repo_id) == type(repo_list) else ''
plot_width = 1100
p = figure(plot_width=plot_width, plot_height=300, title="{} Mean Duration (Days) {} Pull Requests".format(title_beginning,description),
y_range=grouped_x_groups[::-1], x_range=y_groups,
toolbar_location=None, tools="")#, x_axis_location="above")
for x_group in x_groups:
outliers = driver_df_mean.loc[(driver_df_mean[heat_field] > heat_max) & (driver_df_mean['grouped_x'].str.contains(x_group))]
if len(outliers) > 0:
p.add_layout(Title(text="** Outliers capped at {} days: {} outlier(s) for {} were capped at {} **".format(heat_max, len(outliers), x_group, heat_max), align="center"), "below")
p.rect(x=y_axis, y='grouped_x', width=1, height=1, source=source,
line_color=None, fill_color=transform(heat_field, mapper))
color_bar = ColorBar(color_mapper=mapper, location=(0, 0),
ticker=BasicTicker(desired_num_ticks=9),
formatter=PrintfTickFormatter(format="%d"))
p.add_layout(color_bar, 'right')
p.title.align = "center"
p.title.text_font_size = "16px"
p.axis.axis_line_color = None
p.axis.major_tick_line_color = None
p.axis.major_label_text_font_size = "11pt"
p.axis.major_label_standoff = 0
p.xaxis.major_label_orientation = 1.0
p.xaxis.axis_label = 'Month Closed' if y_axis[0:6] == 'closed' else 'Date Created' if y_axis[0:7] == 'created' else 'Repository' if y_axis == 'repo_name' else ''
# p.yaxis.axis_label = 'Merged Status'
p.title.text_font_size = "16px"
p.xaxis.axis_label_text_font_size = "16px"
p.xaxis.major_label_text_font_size = "14px"
p.yaxis.major_label_text_font_size = "15px"
plot = p
p = figure(width = plot_width, height=200, margin = (0, 0, 0, 0))
caption = "Caption Here"
p.add_layout(Label(
x = 0, # Change to shift caption left or right
y = 160,
x_units = 'screen',
y_units = 'screen',
text='{}'.format(caption),
text_font = 'times', # Use same font as paper
text_font_size = '15pt',
render_mode='css'
))
p.outline_line_color = None
caption_plot = p
grid = gridplot([[plot], [caption_plot]])
show(grid)
if save_files:
repo_extension = 'All' if not repo_id else repo_id
export_png(grid, filename="./images/heat_map_pr_duration_merged_status/heat_map_duration_by_merged_status__{}_PRs__yaxis_{}__repo_{}.png".format(description, y_axis, repo_dict[repo_id]))
#heat_map(pr_closed, repo_id=25502)
if display_grouping == 'repo':
for repo_id in repo_set:
vertical_grouped_bar(pr_all, repo_id=repo_id)
horizontal_stacked_bar(pr_closed, repo_id=repo_id)
merged_ratio_vertical_grouped_bar({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_id)
visualize_mean_response_times(pr_closed, repo_id=repo_id, legend_position='center')
visualize_mean_time_between_responses({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_id)
visualize_time_to_first_comment(pr_closed, repo_id= repo_id, legend_position='top_right', remove_outliers = scatter_plot_outliers_removed)
events_types_heat_map(pr_closed, repo_id=repo_id)
#print(pr_closed)
pr_duration_frame = pr_closed.assign(pr_duration=(pr_closed['pr_closed_at'] - pr_closed['pr_created_at']))
pr_duration_frame = pr_duration_frame.assign(pr_duration_days = (pr_duration_frame['pr_duration'] / datetime.timedelta(minutes=1))/60/24)
heat_map(pr_duration_frame, repo_id=repo_id)
elif display_grouping == 'competitors':
vertical_grouped_bar(pr_all, repo_id=repo_list)
horizontal_stacked_bar(pr_closed, repo_id=repo_list)
merged_ratio_vertical_grouped_bar({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
visualize_mean_response_times(pr_closed, repo_id=repo_list, legend_position='center')
visualize_mean_time_between_responses({'All':pr_closed,'Slowest 20%':pr_slow20_not_merged.append(pr_slow20_merged,ignore_index=True)}, repo_id = repo_list)
visualize_time_to_first_comment(pr_closed, repo_id= repo_list, legend_position='top_right', remove_outliers = scatter_plot_outliers_removed)
events_types_heat_map(pr_closed, repo_id=repo_list)
pr_duration_frame = pr_closed.assign(pr_duration=(pr_closed['pr_closed_at'] - pr_closed['pr_created_at']))
pr_duration_frame = pr_duration_frame.assign(pr_duration_days = (pr_duration_frame['pr_duration'] / datetime.timedelta(minutes=1))/60/24)
heat_map(pr_duration_frame, repo_id=repo_list)
###Output
_____no_output_____ |
pythonDataScienceHandbook/02.00-Introduction-to-NumPy.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* Introduction to NumPy This chapter, along with chapter 3, outlines techniques for effectively loading, storing, and manipulating in-memory data in Python.The topic is very broad: datasets can come from a wide range of sources and a wide range of formats, including be collections of documents, collections of images, collections of sound clips, collections of numerical measurements, or nearly anything else.Despite this apparent heterogeneity, it will help us to think of all data fundamentally as arrays of numbers.For example, images–particularly digital images–can be thought of as simply two-dimensional arrays of numbers representing pixel brightness across the area.Sound clips can be thought of as one-dimensional arrays of intensity versus time.Text can be converted in various ways into numerical representations, perhaps binary digits representing the frequency of certain words or pairs of words.No matter what the data are, the first step in making it analyzable will be to transform them into arrays of numbers.(We will discuss some specific examples of this process later in [Feature Engineering](05.04-Feature-Engineering.ipynb))For this reason, efficient storage and manipulation of numerical arrays is absolutely fundamental to the process of doing data science.We'll now take a look at the specialized tools that Python has for handling such numerical arrays: the NumPy package, and the Pandas package (discussed in Chapter 3).This chapter will cover NumPy in detail. NumPy (short for *Numerical Python*) provides an efficient interface to store and operate on dense data buffers.In some ways, NumPy arrays are like Python's built-in ``list`` type, but NumPy arrays provide much more efficient storage and data operations as the arrays grow larger in size.NumPy arrays form the core of nearly the entire ecosystem of data science tools in Python, so time spent learning to use NumPy effectively will be valuable no matter what aspect of data science interests you.If you followed the advice outlined in the Preface and installed the Anaconda stack, you already have NumPy installed and ready to go.If you're more the do-it-yourself type, you can go to http://www.numpy.org/ and follow the installation instructions found there.Once you do, you can import NumPy and double-check the version:
###Code
import numpy
numpy.__version__
###Output
_____no_output_____
###Markdown
For the pieces of the package discussed here, I'd recommend NumPy version 1.8 or later.By convention, you'll find that most people in the SciPy/PyData world will import NumPy using ``np`` as an alias:
###Code
import numpy as np
###Output
_____no_output_____ |
ML_projects/5. Traffic Sign Classification with Deep Learning/.ipynb_checkpoints/deeplearning_traffic_sign_classifier_notebook-checkpoint.ipynb | ###Markdown
TASK 1: UNDERSTAND THE PROBLEM STATEMENT - Our goal is to build a multiclassifier model based on deep learning to classify various traffic signs. - Dataset that we are using to train the model is **German Traffic Sign Recognition Benchmark**.- Dataset consists of 43 classes: - ( 0, b'Speed limit (20km/h)') ( 1, b'Speed limit (30km/h)') ( 2, b'Speed limit (50km/h)') ( 3, b'Speed limit (60km/h)') ( 4, b'Speed limit (70km/h)') - ( 5, b'Speed limit (80km/h)') ( 6, b'End of speed limit (80km/h)') ( 7, b'Speed limit (100km/h)') ( 8, b'Speed limit (120km/h)') ( 9, b'No passing') - (10, b'No passing for vehicles over 3.5 metric tons') (11, b'Right-of-way at the next intersection') (12, b'Priority road') (13, b'Yield') (14, b'Stop') - (15, b'No vehicles') (16, b'Vehicles over 3.5 metric tons prohibited') (17, b'No entry')- (18, b'General caution') (19, b'Dangerous curve to the left')- (20, b'Dangerous curve to the right') (21, b'Double curve')- (22, b'Bumpy road') (23, b'Slippery road')- (24, b'Road narrows on the right') (25, b'Road work')- (26, b'Traffic signals') (27, b'Pedestrians') (28, b'Children crossing')- (29, b'Bicycles crossing') (30, b'Beware of ice/snow')- (31, b'Wild animals crossing')- (32, b'End of all speed and passing limits') (33, b'Turn right ahead')- (34, b'Turn left ahead') (35, b'Ahead only') (36, b'Go straight or right')- (37, b'Go straight or left') (38, b'Keep right') (39, b'Keep left')- (40, b'Roundabout mandatory') (41, b'End of no passing')- (42, b'End of no passing by vehicles over 3.5 metric tons')- **Data Source** - https://www.kaggle.com/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign TASK 2: GET THE DATA AND VISUALIZE IT
###Code
import pickle
with open("train.p", mode='rb') as training_data:
train = pickle.load(training_data)
with open("valid.p", mode='rb') as validation_data:
valid = pickle.load(validation_data)
with open("test.p", mode='rb') as testing_data:
test = pickle.load(testing_data)
X_train, y_train = train['features'], train['labels']
X_validation, y_validation = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
X_test.shape
import numpy as np
import matplotlib.pyplot as plt
i = np.random.randint(1, len(X_test))
plt.imshow(X_test[i])
print('label = ', y_test[i])
###Output
label = 25
###Markdown
MINI CHALLENGE- Complete the code below to print out a 5 by 5 grid showing random traffic sign images along with their corresponding labels as their titles
###Code
# Let's view more images in a grid format
# Define the dimensions of the plot grid
W_grid = 5
L_grid = 5
# fig, axes = plt.subplots(L_grid, W_grid)
# subplot return the figure object and axes object
# we can use the axes object to plot specific figures at various locations
fig, axes = plt.subplots(L_grid, W_grid, figsize = (10,10))
axes = axes.ravel() # flatten the 5 x 5 grid of axes into a 25-element array
n_training = len(X_test) # number of images available to sample from (the test set is used here)
# Select a random number from 0 to n_training
for i in np.arange(0, W_grid * L_grid): # iterate over every cell of the grid
###Output
_____no_output_____
###Markdown
TASK 3: IMPORT SAGEMAKER/BOTO3, CREATE A SESSION, DEFINE S3 AND ROLE
###Code
# Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python
# Boto3 allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2
import sagemaker
import boto3
# Let's create a Sagemaker session
sagemaker_session = sagemaker.Session()
# Let's define the S3 bucket and prefix that we want to use in this session
bucket = 'sagemaker-practical' # bucket named 'sagemaker-practical' was created beforehand
prefix = 'traffic-sign-classifier' # prefix is the subfolder within the bucket.
# Let's get the execution role for the notebook instance.
# This is the IAM role that you created when you created your notebook instance. You pass the role to the training job.
# Note that AWS Identity and Access Management (IAM) role that Amazon SageMaker can assume to perform tasks on your behalf (for example, reading training results, called model artifacts, from the S3 bucket and writing training results to Amazon S3).
role = sagemaker.get_execution_role()
print(role)
###Output
arn:aws:iam::542063182511:role/service-role/AmazonSageMaker-ExecutionRole-20191104T033920
###Markdown
TASK 4: UPLOAD THE DATA TO S3
###Code
# Create directory to store the training and validation data
import os
os.makedirs("./data", exist_ok = True)
# Save several arrays into a single file in uncompressed .npz format
# Read more here: https://numpy.org/devdocs/reference/generated/numpy.savez.html
np.savez('./data/training', image = X_train, label = y_train)
np.savez('./data/validation', image = X_test, label = y_test)
# Upload the training and validation data to S3 bucket
prefix = 'traffic-sign'
training_input_path = sagemaker_session.upload_data('data/training.npz', key_prefix = prefix + '/training')
validation_input_path = sagemaker_session.upload_data('data/validation.npz', key_prefix = prefix + '/validation')
print(training_input_path)
print(validation_input_path)
###Output
s3://sagemaker-us-east-2-542063182511/traffic-sign/training/training.npz
s3://sagemaker-us-east-2-542063182511/traffic-sign/validation/validation.npz
###Markdown
TASK 5: TRAIN THE CNN LENET MODEL USING SAGEMAKER The model consists of the following layers: - STEP 1: THE FIRST CONVOLUTIONAL LAYER 1 - Input = 32x32x3 - Output = 28x28x6 - Output = (Input-filter+1)/Stride* => (32-5+1)/1=28 - Used a 5x5 Filter with input depth of 3 and output depth of 6 - Apply a RELU Activation function to the output - pooling for input, Input = 28x28x6 and Output = 14x14x6 * Stride is the amount by which the kernel is shifted when the kernel is passed over the image.- STEP 2: THE SECOND CONVOLUTIONAL LAYER 2 - Input = 14x14x6 - Output = 10x10x16 - Layer 2: Convolutional layer with Output = 10x10x16 - Output = (Input-filter+1)/strides => 10 = 14-5+1/1 - Apply a RELU Activation function to the output - Pooling with Input = 10x10x16 and Output = 5x5x16- STEP 3: FLATTENING THE NETWORK - Flatten the network with Input = 5x5x16 and Output = 400- STEP 4: FULLY CONNECTED LAYER - Layer 3: Fully Connected layer with Input = 400 and Output = 120 - Apply a RELU Activation function to the output- STEP 5: ANOTHER FULLY CONNECTED LAYER - Layer 4: Fully Connected Layer with Input = 120 and Output = 84 - Apply a RELU Activation function to the output- STEP 6: FULLY CONNECTED LAYER - Layer 5: Fully Connected layer with Input = 84 and Output = 43
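Since `train-cnn.py` itself is not printed in this notebook, the sketch below is only an illustration of the LeNet layer stack described above, written with `tf.keras`; the layer names, optimizer, and loss are assumptions and may differ from the actual training script. The spatial sizes follow the (Input - filter + 1) / stride rule quoted above.

```python
# Hedged sketch of the LeNet architecture described above (43 traffic-sign classes).
# This is NOT the contents of train-cnn.py, which is not shown in this notebook.
import tensorflow as tf
from tensorflow.keras import layers

def build_lenet(num_classes=43):
    model = tf.keras.Sequential([
        layers.Conv2D(6, (5, 5), activation='relu', input_shape=(32, 32, 3)),  # 32x32x3 -> 28x28x6
        layers.MaxPooling2D((2, 2)),                                           # 28x28x6 -> 14x14x6
        layers.Conv2D(16, (5, 5), activation='relu'),                          # 14x14x6 -> 10x10x16
        layers.MaxPooling2D((2, 2)),                                           # 10x10x16 -> 5x5x16
        layers.Flatten(),                                                      # 5x5x16 -> 400
        layers.Dense(120, activation='relu'),
        layers.Dense(84, activation='relu'),
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```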
###Code
!pygmentize train-cnn.py
from sagemaker.tensorflow import TensorFlow
# To Train a TensorFlow model, we will use TensorFlow estimator from the Sagemaker SDK
# entry_point: a script that will run in a container. This script will include model description and training.
# role: a role that's obtained The role assigned to the running notebook.
# train_instance_count: number of container instances used to train the model.
# train_instance_type: instance type!
# framwork_version: version of Tensorflow
# py_version: Python version.
# script_mode: allows for running script in the container.
# hyperparameters: indicate the hyperparameters for the training job such as epochs and learning rate
tf_estimator = TensorFlow(entry_point='train-cnn.py',
role=role,
train_instance_count=1,
train_instance_type='ml.c4.2xlarge',
framework_version='1.12',
py_version='py3',
script_mode=True,
hyperparameters={
'epochs': 2 ,
'batch-size': 32,
'learning-rate': 0.001}
)
tf_estimator.fit({'training': training_input_path, 'validation': validation_input_path})
###Output
2021-03-25 19:08:12 Starting - Starting the training job...
2021-03-25 19:08:35 Starting - Launching requested ML instancesProfilerReport-1616699291: InProgress
......
2021-03-25 19:09:35 Starting - Preparing the instances for training.
###Markdown
TASK 7: DEPLOY THE MODEL WITHOUT ACCELERATORS
###Code
# Deploying the model
import time
tf_endpoint_name = 'trafficsignclassifier-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
tf_predictor = tf_estimator.deploy(initial_instance_count = 1,
instance_type = 'ml.t2.medium',
endpoint_name = tf_endpoint_name)
# Making predictions from the end point
%matplotlib inline
import random
import matplotlib.pyplot as plt
#Pre-processing the images
num_samples = 5
indices = random.sample(range(X_test.shape[0] - 1), num_samples)
images = X_test[indices]/255
labels = y_test[indices]
for i in range(num_samples):
plt.subplot(1,num_samples,i+1)
plt.imshow(images[i])
plt.title(labels[i])
plt.axis('off')
# Making predictions
prediction = tf_predictor.predict(images.reshape(num_samples, 32, 32, 3))['predictions']
prediction = np.array(prediction)
predicted_label = prediction.argmax(axis=1)
print('Predicted labels are: {}'.format(predicted_label))
# Deleting the end-point
tf_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
MINI CHALLENGE (TAKE HOME) - Try to improve the model accuracy by experimenting with Dropout, adding more convolutional layers, and changing the size of the filters EXCELLENT JOB MINI CHALLENGE SOLUTIONS
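For the take-home challenge, one possible direction is sketched below; it assumes a `tf.keras` setup like the LeNet outlined in TASK 5 and is a starting point rather than a reference solution. (The code cell that follows contains the solution to the earlier 5-by-5 grid challenge.)

```python
# Illustrative sketch only: a deeper network with Dropout for the take-home challenge.
# It assumes the tf.keras setup from TASK 5 and is not the official solution.
import tensorflow as tf
from tensorflow.keras import layers

deeper_model = tf.keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),                    # drop 25% of activations during training
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),                     # heavier dropout before the classifier
    layers.Dense(43, activation='softmax'),
])
```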
###Code
# Select a random number
index = np.random.randint(0, n_training)
# read and display an image with the selected index
axes[i].imshow( X_test[index])
axes[i].set_title(y_test[index], fontsize = 15)
axes[i].axis('off')
plt.subplots_adjust(hspace=0.4)
###Output
_____no_output_____ |
Notebooks/DeforestationDataWrangle_RC2_CV.ipynb | ###Markdown
Deforestation Data Wrangle (RC2) Imports.
###Code
# imports.
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import seaborn as sns
pd.set_option('display.float_format', lambda x: '%.2f' % x)
# read the data files.
forest = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/WorldBankDeforestation/target/Forestarea(%25land_area).csv', skiprows= 3)
mining = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/WorldBankDeforestation/features/Oresandmetalsexports(%25ofmerchandiseexports).csv', skiprows=3)
livestock = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/WorldBankDeforestation/features/Livestockproductionindex(2004-2006%3D100).csv', skiprows=3)
agriculture = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/WorldBankDeforestation/features/Agriculturalland(sq.km).csv', skiprows=3)
population = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/WorldBankDeforestation/features/UrbanPopulationTotal.csv', skiprows=3)
gdp = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/WorldBankDeforestation/features/GDPpercapitagrowth(annual%20%25).csv', skiprows=3)
electricity = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/WorldBankDeforestation/features/Electricpowerconsumption(kWhpercapita).csv', skiprows=3)
crops = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/WorldBankDeforestation/features/Cropproductionindex(2004-2006%3D100).csv', skiprows=3)
food = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/DataSets/master/WorldBankDeforestation/features/Foodproductionindex(2004-2006%3D100).csv', skiprows=3)
###Output
_____no_output_____
###Markdown
1st Wrangle Cleaning.
###Code
# 1st wrangle for cleaning.
def wrangle(df):
df.drop(df.iloc[:, 3:34], inplace=True, axis=1)
df = df.drop(columns=['2019', 'Unnamed: 64'])
for col in df.select_dtypes(include=np.number):
df[col] = df[col].fillna(df[col].median())
df = df.fillna(method='bfill', axis= 1)
df = df.fillna(method='ffill', axis= 1)
year = map(str, range(1990, 2019))
feature = df.iloc[0][2]
df = pd.concat([pd.melt(df, id_vars=['Country Code'], value_vars=val, var_name='Year', value_name=feature) for val in year])
return(df)
# wrangle the data.
forest = wrangle(forest)
agriculture = wrangle(agriculture)
electricity = wrangle(electricity)
gdp = wrangle(gdp)
livestock = wrangle(livestock)
mining = wrangle(mining)
population = wrangle(population)
crops = wrangle(crops)
food = wrangle(food)
feature_dfs = [agriculture, gdp, livestock, population, crops, food, mining, electricity]
# merge the data files.
def merge_features(list_dfs):
train = list_dfs.pop(0)
for df in list_dfs:
train = train.merge(df, on=['Country Code', 'Year'])
return(train)
# merge with forest.
features = merge_features(feature_dfs)
train = features.merge(forest, on=['Country Code', 'Year'])
print(train.shape)
train.head()
# Download the csv.
from google.colab import files
train.to_csv('WorldBank_1990_2018.csv')
files.download('WorldBank_1990_2018.csv')
###Output
_____no_output_____
###Markdown
2nd Wrangle Predictions Dataframe.
###Code
# 2nd wrangle to make predictions data frame.
def predicitons_df(df):
model = LinearRegression()
codes = df['Country Code'].unique()
years = [year for year in range(2019, 2121)]
rows = []
feature = df.columns.tolist()[2]
for code in codes:
dictionary = {'Country Code': code}
model.fit(df[df['Country Code'] == code][['Year']],
df[df['Country Code'] == code][feature])
for year in years:
prediction = model.predict([[year]])
dictionary[str(year)] = prediction[0]
rows.append(dictionary)
df_predictions = pd.DataFrame(rows)
df_predictions = df_predictions[
['Country Code'] + [str(year) for year in years]]
year = map(str, range(2019, 2121))
df_predictions = pd.concat([pd.melt(df_predictions, id_vars=['Country Code'], value_vars=val, var_name='Year', value_name=feature) for val in year])
return(df_predictions)
# wrangle the data.
agriculture_pred = predicitons_df(agriculture)
electricity_pred = predicitons_df(electricity)
gdp_pred = predicitons_df(gdp)
livestock_pred = predicitons_df(livestock)
mining_pred = predicitons_df(mining)
population_pred = predicitons_df(population)
crops_pred = predicitons_df(crops)
food_pred = predicitons_df(food)
forest_pred = predicitons_df(forest)
feature_dfs_pred = [agriculture_pred, gdp_pred, livestock_pred, population_pred, crops_pred, food_pred, mining_pred, electricity_pred]
# merge the data files.
def merge_pred_features(list_dfs_pred):
test = list_dfs_pred.pop(0)
for df in list_dfs_pred:
test = test.merge(df, on=['Country Code', 'Year'])
return(test)
# merge with forest.
features = merge_pred_features(feature_dfs_pred)
test = features.merge(forest_pred, on=['Country Code', 'Year'])
print(test.shape)
test.head()
# Download the csv file.
from google.colab import files
test.to_csv('WorldBank_2019_2120.csv')
files.download('WorldBank_2019_2120.csv')
###Output
_____no_output_____ |
Documentation/Examples/jupyter/Linear_Simulation_Fit.ipynb | ###Markdown
Linear Data Simulation and FitThis notebook creates linear data with noise and fits it. The residuals of the fit and their distribution are also displayed. This code generates a line $f(x)= m\times x+b$, from X_MIN to X_MAX with a random number added from a Gaussian distribution with zero mean. Uses packages1. [numpy][np]1. [scipy][sp]1. [matplotlib][plt] 2. [ipywidgets][ip]3. [random][rd]4. [types][types]5. [math][math]Example output :__Output of interact_with_plot()__Linear Plot With Small Noise and Negative SlopeLinear Plot With Large Noise and Positive Slope__Output of interact_with_residuals()__Linear Plot And Residuals With No Noise and Positive SlopeLinear Plot With Noise and Positive Slope[np]:http://docs.scipy.org/doc/numpy/reference/?v=20160706100716[sp]:http://docs.scipy.org/doc/scipy/reference/?v=20160706100716[plt]:http://matplotlib.org/contents.html?v=20160706100716[rd]:https://docs.python.org/2.7/library/random.html?highlight=randommodule-random[ip]:http://ipywidgets.readthedocs.io/en/latest/[types]:https://docs.python.org/2/library/types.html[math]:https://docs.python.org/2/library/math.html
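For reference, the `fit` routine below minimizes the sum of squared residuals of the model $y = a_1 x + a_0$ over the simulated points $(x_i, y_i)$, where the noise standard deviation $\sigma$ is the `noise_magnitude` slider value:

$$\hat{a} = \operatorname*{arg\,min}_{a_0,\, a_1} \sum_{i=1}^{N} \big(a_1 x_i + a_0 - y_i\big)^2, \qquad y_i = m\, x_i + b + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0, \sigma^2).$$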
###Code
# import needed libraries
import numpy as np
import scipy.optimize as so
import matplotlib.pyplot as plt
from ipywidgets import *
import random
from types import *
import math
# Define Constants
# Constants that determine the span of the line in the x-axis
X_MIN=-10
X_MAX=10
ListType=list
# Define Functions
# Define a function that finds the optimized least squared fit to a function
def fit(function,xdata,ydata,a0):
"Fit returns a least square fit "
error_function=lambda a, xdata, ydata:function(a,xdata)-ydata
a,success=so.leastsq(error_function, a0,args=(xdata,ydata))
return a
# Define a linear function
def line_function(a,x):
"line function (y=a[1]x+a[0])"
return a[1]*x+a[0]
# Define a function that finds residuals given a fit function and fit parameters and an original data set
def find_residuals(fit_function,fit_parameters,x_data,y_data):
"""Returns the residuals for a fit"""
if type(x_data) in [np.ndarray,ListType]:
output=map(lambda x:fit_function(fit_parameters,x),x_data)
if type(y_data) is not ListType:
raise TypeError("y_data must be a list when x_data is a list or ndarray")
output=[f_x-y_data[index] for index,f_x in enumerate(output)]
elif isinstance(x_data, float):  # single scalar input
output=fit_function(fit_parameters,x_data)-y_data
else:
output=None
return output
# Define a function to plot a line and a fit through that line
def plot_line(noise_magnitude,number_points,slope,intercept):
"A function to plot a line with noise"
data_list=np.linspace(X_MIN,X_MAX,number_points)
y_data=[slope*x+intercept+random.gauss(0,noise_magnitude) for x in data_list]
results=fit(line_function,data_list,y_data,[1,0])
y_fit=[line_function(results,x) for x in data_list]
#plot the data
plt.plot(data_list,y_data,'ob')
#plot the fit
plt.plot(data_list,y_fit,'r-',linewidth=5)
ax=plt.gca()
ax.set_ylim(-300,300)
ax.set_title('y = {0:3.2f} x + {1:3.2f}'.format(results[1],results[0]))
plt.show()
# Define a plotting function that shows a line, a fit through that line, the residuals of the fit and a histogram
# of those residuals
def plot_residuals(noise_magnitude,number_points,slope,intercept):
"A function to plot a line with noise and the residuals of that fit including a histogram of those residuals"
data_list=np.linspace(X_MIN,X_MAX,number_points)
y_data=[slope*x+intercept+random.gauss(0,noise_magnitude) for x in data_list]
results=fit(line_function,data_list,y_data,[1,0])
y_fit=[line_function(results,x) for x in data_list]
#plot the data
# Comment this line to change the plot layout
fig, (ax0, ax1, ax2) = plt.subplots(nrows=3)
# Uncomment these lines to change the layout
# fig = plt.figure()
# ax0 = plt.subplot(221)
# ax1 = plt.subplot(223)
# ax2 = plt.subplot(122)
ax0.plot(data_list,y_data,'ob')
# plot the fit
ax0.plot(data_list,y_fit,'r-',linewidth=5)
ax0.set_ylim(-300,300)
ax0.set_title('y = {0:3.2f} x + {1:3.2f}'.format(results[1],results[0]))
# find the residuals
residuals=find_residuals(line_function,results,data_list,y_data)
# plot the residuals
ax1.plot(data_list,residuals,'r^')
ax1.set_ylim(-100,100)
# plot a histogram of the residuals
ax2.hist(residuals,bins=int(math.floor(math.sqrt(number_points))))
ax2.set_ylim(0,100)
ax2.set_xlim(-200,200)
# set the plot titles
ax1.set_title('Residuals')
ax2.set_title('Residual Distribution')
# display
plt.tight_layout()
plt.show()
# define scripts calling these create interactive plots
def interact_with_plot():
%matplotlib inline
interact(plot_line,noise_magnitude=(0,100,1),number_points=(10,1000,10),slope=(-30,30,.1),intercept=(-200,200,1))
# Test the find_residuals function
def residual_script():
data_list=np.linspace(X_MIN,X_MAX,1000)
y_data=[5*x+10+random.gauss(0,5) for x in data_list]
results=fit(line_function,data_list,y_data,[1,0])
print(find_residuals(line_function,results,data_list,y_data))
def interact_with_residuals():
%matplotlib inline
interact(plot_residuals,noise_magnitude=(0,100,1),
number_points=(10,1000,10),slope=(-30,30,.1),intercept=(-200,200,1))
#%matplotlib notebook
plot_residuals(10,1000,5,7)
interact_with_plot()
interact_with_residuals()
###Output
_____no_output_____ |
aws_sagemaker_studio/frameworks/pytorch_cnn_cifar10/pytorch_cnn_cifar10.ipynb | ###Markdown
Training and Hosting a PyTorch model in Amazon SageMaker*(This notebook was tested with the "Python 3 (PyTorch CPU Optimized)" kernel.)*Amazon SageMaker is a fully managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. The SageMaker Python SDK makes it easy to train and deploy models in Amazon SageMaker with several different machine learning and deep learning frameworks, including PyTorch.In this notebook, we use Amazon SageMaker to train a convolutional neural network using PyTorch and the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and then we host the model in Amazon SageMaker for inference. SetupLet's start by specifying:- An Amazon S3 bucket and prefix for training and model data. This should be in the same region used for SageMaker Studio, training, and hosting.- An IAM role for SageMaker to access to your training and model data. If you wish to use a different role than the one set up for SageMaker Studio, replace `sagemaker.get_execution_role()` with the appropriate IAM role or ARN. For more about using IAM roles with SageMaker, see [the AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = "pytorch-cnn-cifar10-example"
role = sagemaker.get_execution_role()
###Output
_____no_output_____
###Markdown
Prepare the training dataThe [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a subset of the [80 million tiny images dataset](https://people.csail.mit.edu/torralba/tinyimages). It consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. Download the dataFirst we download the dataset:
###Code
%%bash
wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
tar xfvz cifar-10-python.tar.gz
mkdir data
mv cifar-10-batches-py data/.
rm cifar-10-python.tar.gz
###Output
cifar-10-batches-py/
cifar-10-batches-py/data_batch_4
cifar-10-batches-py/readme.html
cifar-10-batches-py/test_batch
cifar-10-batches-py/data_batch_3
cifar-10-batches-py/batches.meta
cifar-10-batches-py/data_batch_2
cifar-10-batches-py/data_batch_5
cifar-10-batches-py/data_batch_1
###Markdown
After downloading the dataset, we use the [`torchvision.datasets` module](https://pytorch.org/docs/stable/torchvision/datasets.html) to load the CIFAR-10 dataset, utilizing the [`torchvision.transforms` module](https://pytorch.org/docs/stable/torchvision/transforms.html) to convert the data into normalized tensor images:
###Code
from cifar_utils import classes, show_img, train_data_loader, test_data_loader
train_loader = train_data_loader()
test_loader = test_data_loader()
###Output
_____no_output_____
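`cifar_utils.py` is not shown in this notebook. A minimal sketch of what `train_data_loader` could look like, assuming the standard `torchvision` pattern described above (the batch size and normalization constants are assumptions):

```python
# Hedged sketch of cifar_utils.train_data_loader; the real module is not shown here,
# so the batch size and normalization values are assumptions.
import torch
import torchvision
import torchvision.transforms as transforms

def train_data_loader(data_dir="data", batch_size=4):
    transform = transforms.Compose([
        transforms.ToTensor(),                                   # HWC uint8 -> CHW float in [0, 1]
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # shift/scale to roughly [-1, 1]
    ])
    trainset = torchvision.datasets.CIFAR10(root=data_dir, train=True,
                                            download=False, transform=transform)
    return torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                       shuffle=True, num_workers=2)
```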
###Markdown
Preview the dataNow we can view some of data we have prepared:
###Code
import numpy as np
import torchvision, torch
# get some random training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
# show images
show_img(torchvision.utils.make_grid(images))
# print labels
print(" ".join("%9s" % classes[labels[j]] for j in range(4)))
###Output
truck car bird bird
###Markdown
Upload the dataWe use the `sagemaker.s3.S3Uploader` to upload our dataset to Amazon S3. The return value `inputs` identifies the location -- we use this later for the training job.
###Code
from sagemaker.s3 import S3Uploader
inputs = S3Uploader.upload("data", "s3://{}/{}/data".format(bucket, prefix))
###Output
_____no_output_____
###Markdown
Prepare the entry-point scriptWhen SageMaker trains and hosts our model, it runs a Python script that we provide. (This is run as the entry point of a Docker container.) For training, this script contains the PyTorch code needed for the model to learn from our dataset. For inference, the code is for loading the model and processing the prediction input. For convenience, we put both the training and inference code in the same file. TrainingThe training code is very similar to a training script we might run outside of Amazon SageMaker, but we can access useful properties about the training environment through various environment variables. For this notebook, our script retrieves the following environment variable values:* `SM_HOSTS`: a list of hosts on the container network.* `SM_CURRENT_HOST`: the name of the current container on the container network.* `SM_MODEL_DIR`: the location for model artifacts. This directory is uploaded to Amazon S3 at the end of the training job.* `SM_CHANNEL_TRAINING`: the location of our training data.* `SM_NUM_GPUS`: the number of GPUs available to the current container.We also use a main guard (`if __name__=='__main__':`) to ensure that our training code is executed only for training, as SageMaker imports the entry-point script.For more about writing a PyTorch training script with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlprepare-a-pytorch-training-script). InferenceFor inference, we need to implement a few specific functions to tell SageMaker how to load our model and handle prediction input.* `model_fn(model_dir)`: loads the model from disk. This function must be implemented.* `input_fn(serialized_input_data, content_type)`: deserializes the prediction input.* `predict_fn(input_data, model)`: calls the model on the deserialized data.* `output_fn(prediction_output, accept)`: serializes the prediction output.The last three functions - `input_fn`, `predict_fn`, and `output_fn` - are optional because SageMaker has default implementations to handle common content types. However, there is no default implementation of `model_fn` for PyTorch models on SageMaker, so our script has to implement `model_fn`.For more about PyTorch inference with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlid3). Put it all togetherHere is the full script for both training and hosting our convolutional neural network:
###Code
!pygmentize source/cifar10.py
###Output
# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import argparse
import json
import logging
import os

import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.parallel
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision
import torchvision.models
import torchvision.transforms as transforms

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

classes = ("plane", "car", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck")


# https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py#L118
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


def _train(args):
    is_distributed = len(args.hosts) > 1 and args.dist_backend is not None
    logger.debug("Distributed training - {}".format(is_distributed))

    if is_distributed:
        # Initialize the distributed environment.
        world_size = len(args.hosts)
        os.environ["WORLD_SIZE"] = str(world_size)
        host_rank = args.hosts.index(args.current_host)
        os.environ["RANK"] = str(host_rank)
        dist.init_process_group(backend=args.dist_backend, rank=host_rank, world_size=world_size)
        logger.info(
            "Initialized the distributed environment: '{}' backend on {} nodes. ".format(
                args.dist_backend, dist.get_world_size()
            )
            + "Current host rank is {}. Using cuda: {}. Number of gpus: {}".format(
                dist.get_rank(), torch.cuda.is_available(), args.num_gpus
            )
        )

    device = "cuda" if torch.cuda.is_available() else "cpu"
    logger.info("Device Type: {}".format(device))

    logger.info("Loading Cifar10 dataset")
    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
    )

    trainset = torchvision.datasets.CIFAR10(
        root=args.data_dir, train=True, download=False, transform=transform
    )
    train_loader = torch.utils.data.DataLoader(
        trainset, batch_size=args.batch_size, shuffle=True, num_workers=args.workers
    )

    logger.info("Model loaded")
    model = Net()

    if torch.cuda.device_count() > 1:
        logger.info("Gpu count: {}".format(torch.cuda.device_count()))
        model = nn.DataParallel(model)

    model = model.to(device)

    criterion = nn.CrossEntropyLoss().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

    for epoch in range(0, args.epochs):
        running_loss = 0.0
        for i, data in enumerate(train_loader):
            # get the inputs
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print("[%d, %5d] loss: %.3f" % (epoch + 1, i + 1, running_loss / 2000))
                running_loss = 0.0
    print("Finished Training")
    return _save_model(model, args.model_dir)


def _save_model(model, model_dir):
    logger.info("Saving the model.")
    path = os.path.join(model_dir, "model.pth")
    # recommended way from http://pytorch.org/docs/master/notes/serialization.html
    torch.save(model.cpu().state_dict(), path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    parser.add_argument(
        "--workers",
        type=int,
        default=2,
        metavar="W",
        help="number of data loading workers (default: 2)",
    )
    parser.add_argument(
        "--epochs",
        type=int,
        default=2,
        metavar="E",
        help="number of total epochs to run (default: 2)",
    )
    parser.add_argument(
        "--batch_size", type=int, default=4, metavar="BS", help="batch size (default: 4)"
    )
    parser.add_argument(
        "--lr",
        type=float,
        default=0.001,
        metavar="LR",
        help="initial learning rate (default: 0.001)",
    )
    parser.add_argument(
        "--momentum", type=float, default=0.9, metavar="M", help="momentum (default: 0.9)"
    )
    parser.add_argument(
        "--dist_backend", type=str, default="gloo", help="distributed backend (default: gloo)"
    )

    parser.add_argument("--hosts", type=json.loads, default=os.environ["SM_HOSTS"])
    parser.add_argument("--current-host", type=str, default=os.environ["SM_CURRENT_HOST"])
    parser.add_argument("--model-dir", type=str, default=os.environ["SM_MODEL_DIR"])
    parser.add_argument("--data-dir", type=str, default=os.environ["SM_CHANNEL_TRAINING"])
    parser.add_argument("--num-gpus", type=int, default=os.environ["SM_NUM_GPUS"])

    _train(parser.parse_args())


def model_fn(model_dir):
    logger.info("model_fn")
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = Net()
    if torch.cuda.device_count() > 1:
        logger.info("Gpu count: {}".format(torch.cuda.device_count()))
        model = nn.DataParallel(model)

    with open(os.path.join(model_dir, "model.pth"), "rb") as f:
        model.load_state_dict(torch.load(f))
    return model.to(device)
###Markdown
Run a SageMaker training jobThe SageMaker Python SDK makes it easy for us to interact with SageMaker. Here, we use the `PyTorch` estimator class to start a training job. We configure it with the following parameters:* `entry_point`: our training script.* `role`: an IAM role that SageMaker uses to access training and model data.* `framework_version`: the PyTorch version we wish to use. For a list of supported versions, see [here](https://github.com/aws/sagemaker-python-sdk#pytorch-sagemaker-estimators).* `instance_count`: the number of training instances.* `instance_type`: the training instance type. For a list of supported instance types, see [the AWS Documentation](https://aws.amazon.com/sagemaker/pricing/instance-types/).Once we have our `PyTorch` estimator, we start a training job by calling `fit()` and passing the training data we uploaded to S3 earlier.
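The entry-point script above also parses hyperparameters such as `--epochs`, `--lr`, and `--batch_size`. As a hedged sketch (the values below are illustrative, not tuned), they could be forwarded through the estimator's `hyperparameters` argument, which SageMaker passes to the script as command-line flags:

from sagemaker.pytorch import PyTorch

# Sketch only: illustrative hyperparameter values for source/cifar10.py
estimator_with_hps = PyTorch(
    entry_point="source/cifar10.py",
    role=role,
    framework_version="1.4.0",
    py_version="py3",
    instance_count=1,
    instance_type="ml.c5.xlarge",
    hyperparameters={"epochs": 2, "lr": 0.001, "batch_size": 4},  # becomes --epochs, --lr, --batch_size
)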
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(
entry_point="source/cifar10.py",
role=role,
framework_version="1.4.0",
py_version='py3',
instance_count=1,
instance_type="ml.c5.xlarge",
)
estimator.fit(inputs)
###Output
2021-06-03 18:26:37 Starting - Starting the training job...
2021-06-03 18:27:01 Starting - Launching requested ML instancesProfilerReport-1622744796: InProgress
......
2021-06-03 18:28:01 Starting - Preparing the instances for training......
2021-06-03 18:29:01 Downloading - Downloading input data
2021-06-03 18:29:01 Training - Downloading the training image.....[34mbash: cannot set terminal process group (-1): Inappropriate ioctl for device[0m
[34mbash: no job control in this shell[0m
[34m2021-06-03 18:29:41,665 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training[0m
[34m2021-06-03 18:29:41,668 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2021-06-03 18:29:41,678 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.[0m
[34m2021-06-03 18:29:44,701 sagemaker_pytorch_container.training INFO Invoking user training script.[0m
[34m2021-06-03 18:29:45,128 sagemaker-containers INFO Module default_user_module_name does not provide a setup.py. [0m
[34mGenerating setup.py[0m
[34m2021-06-03 18:29:45,128 sagemaker-containers INFO Generating setup.cfg[0m
[34m2021-06-03 18:29:45,128 sagemaker-containers INFO Generating MANIFEST.in[0m
[34m2021-06-03 18:29:45,129 sagemaker-containers INFO Installing module with the following command:[0m
[34m/opt/conda/bin/python3.6 -m pip install . [0m
[34mProcessing /tmp/tmpvbbkfn_u/module_dir[0m
[34mBuilding wheels for collected packages: default-user-module-name
Building wheel for default-user-module-name (setup.py): started[0m
[34m Building wheel for default-user-module-name (setup.py): finished with status 'done'
Created wheel for default-user-module-name: filename=default_user_module_name-1.0.0-py2.py3-none-any.whl size=7838 sha256=c931aceea70d0a1f63eee7e8d3cb872544194d2b77ae5b0e3a43f1d1991722bf
Stored in directory: /tmp/pip-ephem-wheel-cache-zmpg4d5z/wheels/82/a0/c0/72b33b203d91e19a9f95f893e55e8bc204086f9025e177a224[0m
[34mSuccessfully built default-user-module-name[0m
[34mInstalling collected packages: default-user-module-name[0m
[34mSuccessfully installed default-user-module-name-1.0.0[0m
[34m2021-06-03 18:29:47,407 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2021-06-03 18:29:47,419 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2021-06-03 18:29:47,430 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2021-06-03 18:29:47,439 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"additional_framework_parameters": {},
"channel_input_dirs": {
"training": "/opt/ml/input/data/training"
},
"current_host": "algo-1",
"framework_module": "sagemaker_pytorch_container.training:main",
"hosts": [
"algo-1"
],
"hyperparameters": {},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"training": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "pytorch-training-2021-06-03-18-26-36-486",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-west-2-688520471316/pytorch-training-2021-06-03-18-26-36-486/source/sourcedir.tar.gz",
"module_name": "cifar10",
"network_interface_name": "eth0",
"num_cpus": 4,
"num_gpus": 0,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1"
],
"network_interface_name": "eth0"
},
"user_entry_point": "cifar10.py"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_HOSTS=["algo-1"][0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_HPS={}[0m
[34mSM_USER_ENTRY_POINT=cifar10.py[0m
[34mSM_FRAMEWORK_PARAMS={}[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}[0m
[34mSM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_CHANNELS=["training"][0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_MODULE_NAME=cifar10[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_NUM_CPUS=4[0m
[34mSM_NUM_GPUS=0[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-west-2-688520471316/pytorch-training-2021-06-03-18-26-36-486/source/sourcedir.tar.gz[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"pytorch-training-2021-06-03-18-26-36-486","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-west-2-688520471316/pytorch-training-2021-06-03-18-26-36-486/source/sourcedir.tar.gz","module_name":"cifar10","network_interface_name":"eth0","num_cpus":4,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"cifar10.py"}[0m
[34mSM_USER_ARGS=[][0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_CHANNEL_TRAINING=/opt/ml/input/data/training[0m
[34mPYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python36.zip:/opt/conda/lib/python3.6:/opt/conda/lib/python3.6/lib-dynload:/opt/conda/lib/python3.6/site-packages
[0m
[34mInvoking script with the following command:
[0m
[34m/opt/conda/bin/python3.6 cifar10.py
[0m
[34m[2021-06-03 18:29:49.550 algo-1:44 INFO json_config.py:90] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[2021-06-03 18:29:49.550 algo-1:44 INFO hook.py:192] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[2021-06-03 18:29:49.550 algo-1:44 INFO hook.py:237] Saving to /opt/ml/output/tensors[0m
[34m[2021-06-03 18:29:49.550 algo-1:44 INFO state_store.py:67] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.[0m
[34m[2021-06-03 18:29:49.551 algo-1:44 INFO hook.py:382] Monitoring the collections: losses[0m
[34m[2021-06-03 18:29:49.551 algo-1:44 INFO hook.py:443] Hook is writing from the hook with pid: 44
[0m
2021-06-03 18:30:02 Training - Training image download completed. Training in progress.[34m[1, 2000] loss: 2.184[0m
[34m[1, 4000] loss: 1.845[0m
[34m[1, 6000] loss: 1.695[0m
[34m[1, 8000] loss: 1.600[0m
[34m[1, 10000] loss: 1.546[0m
[34m[1, 12000] loss: 1.487[0m
[34m[2, 2000] loss: 1.444[0m
[34m[2, 4000] loss: 1.407[0m
[34m[2, 6000] loss: 1.388[0m
[34m[2, 8000] loss: 1.358[0m
[34m[2, 10000] loss: 1.331[0m
[34m[2, 12000] loss: 1.319[0m
[34mFinished Training[0m
[34m2021-06-03 18:31:50,581 sagemaker-training-toolkit INFO Reporting training SUCCESS[0m
2021-06-03 18:32:03 Uploading - Uploading generated training model
2021-06-03 18:32:03 Completed - Training job completed
Training seconds: 191
Billable seconds: 191
###Markdown
Deploy the model for inferenceAfter we train our model, we can deploy it to a SageMaker Endpoint, which serves prediction requests in real-time. To do so, we simply call `deploy()` on our estimator, passing in the desired number of instances and instance type for the endpoint:
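If the notebook kernel restarts after training, the model does not need to be retrained before deployment. As a sketch, a completed training job can be re-attached first and then deployed (the job name below is the one printed in the training logs above; substitute your own):

from sagemaker.pytorch import PyTorch

# Sketch: reattach to an already-finished training job, then deploy it.
attached_estimator = PyTorch.attach("pytorch-training-2021-06-03-18-26-36-486")
predictor = attached_estimator.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")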
###Code
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
###Output
-----!
###Markdown
Invoke the endpointWe then use the returned `predictor` object to invoke our endpoint. For demonstration purposes, we also print out the image, its original label, and its predicted label.
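The endpoint returns class scores for each image. A small sketch of turning those scores into a batch accuracy, reusing the same `images` and `labels` drawn from the test loader, could look like this:

# Sketch: fraction of the four sampled test images that the endpoint classifies correctly.
outputs = predictor.predict(images.numpy())
predicted = torch.argmax(torch.from_numpy(np.array(outputs)), dim=1)
accuracy = (predicted == labels).float().mean().item()
print("Batch accuracy: {:.0%}".format(accuracy))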
###Code
# get some test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# print images, labels, and predictions
show_img(torchvision.utils.make_grid(images))
print("GroundTruth: ", " ".join("%4s" % classes[labels[j]] for j in range(4)))
outputs = predictor.predict(images.numpy())
_, predicted = torch.max(torch.from_numpy(np.array(outputs)), 1)
print("Predicted: ", " ".join("%4s" % classes[predicted[j]] for j in range(4)))
###Output
GroundTruth: cat ship ship plane
Predicted: cat car car ship
###Markdown
CleanupOnce finished, we delete our endpoint to release the instances (and avoid incurring extra costs).
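To double-check that nothing is still incurring charges, a quick sketch with `boto3` can list any endpoints that remain in the account (the name filter below is illustrative):

import boto3

# Sketch: confirm no PyTorch endpoints are left in service in this region.
sm_client = boto3.client("sagemaker")
remaining = sm_client.list_endpoints(NameContains="pytorch")["Endpoints"]
print("Endpoints still in service:", [ep["EndpointName"] for ep in remaining])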
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Training and Hosting a PyTorch model in Amazon SageMaker*(This notebook was tested with the "Python 3 (PyTorch CPU Optimized)" kernel.)*Amazon SageMaker is a fully managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. The SageMaker Python SDK makes it easy to train and deploy models in Amazon SageMaker with several different machine learning and deep learning frameworks, including PyTorch.In this notebook, we use Amazon SageMaker to train a convolutional neural network using PyTorch and the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and then we host the model in Amazon SageMaker for inference. SetupLet's start by specifying:- An Amazon S3 bucket and prefix for training and model data. This should be in the same region used for SageMaker Studio, training, and hosting.- An IAM role for SageMaker to access your training and model data. If you wish to use a different role than the one set up for SageMaker Studio, replace `sagemaker.get_execution_role()` with the appropriate IAM role or ARN. For more about using IAM roles with SageMaker, see [the AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'pytorch-cnn-cifar10-example'
role = sagemaker.get_execution_role()
###Output
_____no_output_____
###Markdown
Prepare the training dataThe [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a subset of the [80 million tiny images dataset](https://people.csail.mit.edu/torralba/tinyimages). It consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. Download the dataFirst we download the dataset:
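If `wget` is not available in the notebook environment, an alternative sketch is to let `torchvision` fetch and unpack the same archive into the `data/` directory (this assumes outbound internet access from the notebook):

import torchvision

# Sketch: downloads and extracts cifar-10-batches-py under ./data, matching the layout produced by the wget commands below.
torchvision.datasets.CIFAR10(root="data", train=True, download=True)
torchvision.datasets.CIFAR10(root="data", train=False, download=True)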
###Code
%%bash
wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
tar xfvz cifar-10-python.tar.gz
mkdir data
mv cifar-10-batches-py data/.
rm cifar-10-python.tar.gz
###Output
_____no_output_____
###Markdown
After downloading the dataset, we use the [`torchvision.datasets` module](https://pytorch.org/docs/stable/torchvision/datasets.html) to load the CIFAR-10 dataset, utilizing the [`torchvision.transforms` module](https://pytorch.org/docs/stable/torchvision/transforms.html) to convert the data into normalized tensor images:
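`cifar_utils` is a small helper module that ships with this example and is not printed here. Purely as an assumption about its contents, `train_data_loader` likely wraps the standard torchvision pattern, roughly like this sketch:

import torch
import torchvision
import torchvision.transforms as transforms

# Assumed sketch of cifar_utils.train_data_loader -- not the actual helper code.
def train_data_loader(batch_size=4):
    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
    )
    trainset = torchvision.datasets.CIFAR10(root="data", train=True, download=False, transform=transform)
    return torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2)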
###Code
from cifar_utils import classes, show_img, train_data_loader, test_data_loader
train_loader = train_data_loader()
test_loader = test_data_loader()
###Output
_____no_output_____
###Markdown
Preview the dataNow we can view some of the data we have prepared:
###Code
import numpy as np
import torchvision, torch
# get some random training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
# show images
show_img(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%9s' % classes[labels[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
Upload the dataWe use the `sagemaker.s3.S3Uploader` to upload our dataset to Amazon S3. The return value `inputs` identifies the location -- we use this later for the training job.
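A quick way to confirm the upload landed where expected is to list the prefix. This sketch assumes the `S3Downloader` helper from the same `sagemaker.s3` module:

from sagemaker.s3 import S3Downloader

# Sketch: list the objects uploaded under the dataset prefix.
for key in S3Downloader.list(inputs):
    print(key)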
###Code
from sagemaker.s3 import S3Uploader
inputs = S3Uploader.upload('data', 's3://{}/{}/data'.format(bucket, prefix))
###Output
_____no_output_____
###Markdown
Prepare the entry-point scriptWhen SageMaker trains and hosts our model, it runs a Python script that we provide. (This is run as the entry point of a Docker container.) For training, this script contains the PyTorch code needed for the model to learn from our dataset. For inference, the code is for loading the model and processing the prediction input. For convenience, we put both the training and inference code in the same file. TrainingThe training code is very similar to a training script we might run outside of Amazon SageMaker, but we can access useful properties about the training environment through various environment variables. For this notebook, our script retrieves the following environment variable values:* `SM_HOSTS`: a list of hosts on the container network.* `SM_CURRENT_HOST`: the name of the current container on the container network.* `SM_MODEL_DIR`: the location for model artifacts. This directory is uploaded to Amazon S3 at the end of the training job.* `SM_CHANNEL_TRAINING`: the location of our training data.* `SM_NUM_GPUS`: the number of GPUs available to the current container.We also use a main guard (`if __name__=='__main__':`) to ensure that our training code is executed only for training, as SageMaker imports the entry-point script.For more about writing a PyTorch training script with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlprepare-a-pytorch-training-script). InferenceFor inference, we need to implement a few specific functions to tell SageMaker how to load our model and handle prediction input.* `model_fn(model_dir)`: loads the model from disk. This function must be implemented.* `input_fn(serialized_input_data, content_type)`: deserializes the prediction input.* `predict_fn(input_data, model)`: calls the model on the deserialized data.* `output_fn(prediction_output, accept)`: serializes the prediction output.The last three functions - `input_fn`, `predict_fn`, and `output_fn` - are optional because SageMaker has default implementations to handle common content types. However, there is no default implementation of `model_fn` for PyTorch models on SageMaker, so our script has to implement `model_fn`.For more about PyTorch inference with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlid3). Put it all togetherHere is the full script for both training and hosting our convolutional neural network:
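Because the default handlers already understand NumPy payloads, `cifar10.py` only implements `model_fn`. If custom (de)serialization were ever needed, the optional handlers could be sketched roughly as follows (illustrative only, not part of the actual script):

import json
import numpy as np
import torch

# Illustrative sketch of the optional handlers -- the real script relies on SageMaker's defaults.
def input_fn(serialized_input_data, content_type):
    assert content_type == "application/json"
    data = np.array(json.loads(serialized_input_data), dtype=np.float32)
    return torch.from_numpy(data)

def predict_fn(input_data, model):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.eval()
    with torch.no_grad():
        return model(input_data.to(device))

def output_fn(prediction_output, accept):
    return json.dumps(prediction_output.cpu().numpy().tolist())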
###Code
!pygmentize source/cifar10.py
###Output
_____no_output_____
###Markdown
Run a SageMaker training jobThe SageMaker Python SDK makes it easy for us to interact with SageMaker. Here, we use the `PyTorch` estimator class to start a training job. We configure it with the following parameters:* `entry_point`: our training script.* `role`: an IAM role that SageMaker uses to access training and model data.* `framework_version`: the PyTorch version we wish to use. For a list of supported versions, see [here](https://github.com/aws/sagemaker-python-sdk#pytorch-sagemaker-estimators).* `train_instance_count`: the number of training instances.* `train_instance_type`: the training instance type. For a list of supported instance types, see [the AWS Documentation](https://aws.amazon.com/sagemaker/pricing/instance-types/).Once we have our `PyTorch` estimator, we start a training job by calling `fit()` and passing the training data we uploaded to S3 earlier.
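Note that `train_instance_count` and `train_instance_type` are SageMaker Python SDK v1 parameter names; with SDK v2 the same call uses `instance_count` and `instance_type` plus an explicit `py_version`, roughly as in this sketch (the v2 form also appears elsewhere in this notebook collection):

from sagemaker.pytorch import PyTorch

# SDK v2 form of the same estimator (sketch).
estimator = PyTorch(
    entry_point="source/cifar10.py",
    role=role,
    framework_version="1.4.0",
    py_version="py3",
    instance_count=1,
    instance_type="ml.c5.xlarge",
)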
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point='source/cifar10.py',
role=role,
framework_version='1.4.0',
train_instance_count=1,
train_instance_type='ml.c5.xlarge')
estimator.fit(inputs)
###Output
_____no_output_____
###Markdown
Deploy the model for inferenceAfter we train our model, we can deploy it to a SageMaker Endpoint, which serves prediction requests in real-time. To do so, we simply call `deploy()` on our estimator, passing in the desired number of instances and instance type for the endpoint:
###Code
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
###Output
_____no_output_____
###Markdown
Invoke the endpointWe then use the returned `predictor` object to invoke our endpoint. For demonstration purposes, we also print out the image, its original label, and its predicted label.
###Code
# get some test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# print images, labels, and predictions
show_img(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%4s' % classes[labels[j]] for j in range(4)))
outputs = predictor.predict(images.numpy())
_, predicted = torch.max(torch.from_numpy(np.array(outputs)), 1)
print('Predicted: ', ' '.join('%4s' % classes[predicted[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
CleanupOnce finished, we delete our endpoint to release the instances (and avoid incurring extra costs).
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Training and Hosting a PyTorch model in Amazon SageMaker*(This notebook was tested with the "Python 3 (PyTorch CPU Optimized)" kernel.)*Amazon SageMaker is a fully managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. The SageMaker Python SDK makes it easy to train and deploy models in Amazon SageMaker with several different machine learning and deep learning frameworks, including PyTorch.In this notebook, we use Amazon SageMaker to train a convolutional neural network using PyTorch and the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and then we host the model in Amazon SageMaker for inference. SetupLet's start by specifying:- An Amazon S3 bucket and prefix for training and model data. This should be in the same region used for SageMaker Studio, training, and hosting.- An IAM role for SageMaker to access your training and model data. If you wish to use a different role than the one set up for SageMaker Studio, replace `sagemaker.get_execution_role()` with the appropriate IAM role or ARN. For more about using IAM roles with SageMaker, see [the AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'pytorch-cnn-cifar10-example'
role = sagemaker.get_execution_role()
###Output
_____no_output_____
###Markdown
Prepare the training dataThe [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a subset of the [80 million tiny images dataset](https://people.csail.mit.edu/torralba/tinyimages). It consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. Download the dataFirst we download the dataset:
###Code
%%bash
wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
tar xfvz cifar-10-python.tar.gz
mkdir data
mv cifar-10-batches-py data/.
rm cifar-10-python.tar.gz
###Output
_____no_output_____
###Markdown
After downloading the dataset, we use the [`torchvision.datasets` module](https://pytorch.org/docs/stable/torchvision/datasets.html) to load the CIFAR-10 dataset, utilizing the [`torchvision.transforms` module](https://pytorch.org/docs/stable/torchvision/transforms.html) to convert the data into normalized tensor images:
###Code
from cifar_utils import classes, show_img, train_data_loader, test_data_loader
train_loader = train_data_loader()
test_loader = test_data_loader()
###Output
_____no_output_____
###Markdown
Preview the dataNow we can view some of the data we have prepared:
###Code
import numpy as np
import torchvision, torch
# get some random training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
# show images
show_img(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%9s' % classes[labels[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
Upload the dataWe use the `sagemaker.s3.S3Uploader` to upload our dataset to Amazon S3. The return value `inputs` identifies the location -- we use this later for the training job.
###Code
from sagemaker.s3 import S3Uploader
inputs = S3Uploader.upload('data', 's3://{}/{}/data'.format(bucket, prefix))
###Output
_____no_output_____
###Markdown
Prepare the entry-point scriptWhen SageMaker trains and hosts our model, it runs a Python script that we provide. (This is run as the entry point of a Docker container.) For training, this script contains the PyTorch code needed for the model to learn from our dataset. For inference, the code is for loading the model and processing the prediction input. For convenience, we put both the training and inference code in the same file. TrainingThe training code is very similar to a training script we might run outside of Amazon SageMaker, but we can access useful properties about the training environment through various environment variables. For this notebook, our script retrieves the following environment variable values:* `SM_HOSTS`: a list of hosts on the container network.* `SM_CURRENT_HOST`: the name of the current container on the container network.* `SM_MODEL_DIR`: the location for model artifacts. This directory is uploaded to Amazon S3 at the end of the training job.* `SM_CHANNEL_TRAINING`: the location of our training data.* `SM_NUM_GPUS`: the number of GPUs available to the current container.We also use a main guard (`if __name__=='__main__':`) to ensure that our training code is executed only for training, as SageMaker imports the entry-point script.For more about writing a PyTorch training script with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlprepare-a-pytorch-training-script). InferenceFor inference, we need to implement a few specific functions to tell SageMaker how to load our model and handle prediction input.* `model_fn(model_dir)`: loads the model from disk. This function must be implemented.* `input_fn(serialized_input_data, content_type)`: deserializes the prediction input.* `predict_fn(input_data, model)`: calls the model on the deserialized data.* `output_fn(prediction_output, accept)`: serializes the prediction output.The last three functions - `input_fn`, `predict_fn`, and `output_fn` - are optional because SageMaker has default implementations to handle common content types. However, there is no default implementation of `model_fn` for PyTorch models on SageMaker, so our script has to implement `model_fn`.For more about PyTorch inference with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlid3). Put it all togetherHere is the full script for both training and hosting our convolutional neural network:
###Code
!pygmentize source/cifar10.py
###Output
_____no_output_____
###Markdown
Run a SageMaker training jobThe SageMaker Python SDK makes it easy for us to interact with SageMaker. Here, we use the `PyTorch` estimator class to start a training job. We configure it with the following parameters:* `entry_point`: our training script.* `role`: an IAM role that SageMaker uses to access training and model data.* `framework_version`: the PyTorch version we wish to use. For a list of supported versions, see [here](https://github.com/aws/sagemaker-python-sdk#pytorch-sagemaker-estimators).* `train_instance_count`: the number of training instances.* `train_instance_type`: the training instance type. For a list of supported instance types, see [the AWS Documentation](https://aws.amazon.com/sagemaker/pricing/instance-types/).Once we have our `PyTorch` estimator, we start a training job by calling `fit()` and passing the training data we uploaded to S3 earlier.
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point='source/cifar10.py',
role=role,
framework_version='1.4.0',
train_instance_count=1,
train_instance_type='ml.c5.xlarge')
estimator.fit(inputs)
###Output
_____no_output_____
###Markdown
Deploy the model for inferenceAfter we train our model, we can deploy it to a SageMaker Endpoint, which serves prediction requests in real-time. To do so, we simply call `deploy()` on our estimator, passing in the desired number of instances and instance type for the endpoint:
###Code
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
###Output
_____no_output_____
###Markdown
Invoke the endpointWe then use the returned `predictor` object to invoke our endpoint. For demonstration purposes, we also print out the image, its original label, and its predicted label.
###Code
# get some test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# print images, labels, and predictions
show_img(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%4s' % classes[labels[j]] for j in range(4)))
outputs = predictor.predict(images.numpy())
_, predicted = torch.max(torch.from_numpy(np.array(outputs)), 1)
print('Predicted: ', ' '.join('%4s' % classes[predicted[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
CleanupOnce finished, we delete our endpoint to release the instances (and avoid incurring extra costs).
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Training and Hosting a PyTorch model in Amazon SageMaker*(This notebook was tested with the "Python 3 (PyTorch CPU Optimized)" kernel.)*Amazon SageMaker is a fully managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. The SageMaker Python SDK makes it easy to train and deploy models in Amazon SageMaker with several different machine learning and deep learning frameworks, including PyTorch.In this notebook, we use Amazon SageMaker to train a convolutional neural network using PyTorch and the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and then we host the model in Amazon SageMaker for inference. SetupLet's start by specifying:- An Amazon S3 bucket and prefix for training and model data. This should be in the same region used for SageMaker Studio, training, and hosting.- An IAM role for SageMaker to access your training and model data. If you wish to use a different role than the one set up for SageMaker Studio, replace `sagemaker.get_execution_role()` with the appropriate IAM role or ARN. For more about using IAM roles with SageMaker, see [the AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = "pytorch-cnn-cifar10-example"
role = sagemaker.get_execution_role()
###Output
_____no_output_____
###Markdown
Prepare the training dataThe [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a subset of the [80 million tiny images dataset](https://people.csail.mit.edu/torralba/tinyimages). It consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. Download the dataFirst we download the dataset:
###Code
%%bash
wget http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
tar xfvz cifar-10-python.tar.gz
mkdir data
mv cifar-10-batches-py data/.
rm cifar-10-python.tar.gz
###Output
_____no_output_____
###Markdown
After downloading the dataset, we use the [`torchvision.datasets` module](https://pytorch.org/docs/stable/torchvision/datasets.html) to load the CIFAR-10 dataset, utilizing the [`torchvision.transforms` module](https://pytorch.org/docs/stable/torchvision/transforms.html) to convert the data into normalized tensor images:
###Code
from cifar_utils import classes, show_img, train_data_loader, test_data_loader
train_loader = train_data_loader()
test_loader = test_data_loader()
###Output
_____no_output_____
###Markdown
Preview the dataNow we can view some of the data we have prepared:
###Code
import numpy as np
import torchvision, torch
# get some random training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
# show images
show_img(torchvision.utils.make_grid(images))
# print labels
print(" ".join("%9s" % classes[labels[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
Upload the dataWe use the `sagemaker.s3.S3Uploader` to upload our dataset to Amazon S3. The return value `inputs` identifies the location -- we use this later for the training job.
###Code
from sagemaker.s3 import S3Uploader
inputs = S3Uploader.upload("data", "s3://{}/{}/data".format(bucket, prefix))
###Output
_____no_output_____
###Markdown
Prepare the entry-point scriptWhen SageMaker trains and hosts our model, it runs a Python script that we provide. (This is run as the entry point of a Docker container.) For training, this script contains the PyTorch code needed for the model to learn from our dataset. For inference, the code is for loading the model and processing the prediction input. For convenience, we put both the training and inference code in the same file. TrainingThe training code is very similar to a training script we might run outside of Amazon SageMaker, but we can access useful properties about the training environment through various environment variables. For this notebook, our script retrieves the following environment variable values:* `SM_HOSTS`: a list of hosts on the container network.* `SM_CURRENT_HOST`: the name of the current container on the container network.* `SM_MODEL_DIR`: the location for model artifacts. This directory is uploaded to Amazon S3 at the end of the training job.* `SM_CHANNEL_TRAINING`: the location of our training data.* `SM_NUM_GPUS`: the number of GPUs available to the current container.We also use a main guard (`if __name__=='__main__':`) to ensure that our training code is executed only for training, as SageMaker imports the entry-point script.For more about writing a PyTorch training script with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlprepare-a-pytorch-training-script). InferenceFor inference, we need to implement a few specific functions to tell SageMaker how to load our model and handle prediction input.* `model_fn(model_dir)`: loads the model from disk. This function must be implemented.* `input_fn(serialized_input_data, content_type)`: deserializes the prediction input.* `predict_fn(input_data, model)`: calls the model on the deserialized data.* `output_fn(prediction_output, accept)`: serializes the prediction output.The last three functions - `input_fn`, `predict_fn`, and `output_fn` - are optional because SageMaker has default implementations to handle common content types. However, there is no default implementation of `model_fn` for PyTorch models on SageMaker, so our script has to implement `model_fn`.For more about PyTorch inference with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlid3). Put it all togetherHere is the full script for both training and hosting our convolutional neural network:
###Code
!pygmentize source/cifar10.py
###Output
_____no_output_____
###Markdown
Run a SageMaker training jobThe SageMaker Python SDK makes it easy for us to interact with SageMaker. Here, we use the `PyTorch` estimator class to start a training job. We configure it with the following parameters:* `entry_point`: our training script.* `role`: an IAM role that SageMaker uses to access training and model data.* `framework_version`: the PyTorch version we wish to use. For a list of supported versions, see [here](https://github.com/aws/sagemaker-python-sdk#pytorch-sagemaker-estimators).* `instance_count`: the number of training instances.* `instance_type`: the training instance type. For a list of supported instance types, see [the AWS Documentation](https://aws.amazon.com/sagemaker/pricing/instance-types/).Once we have our `PyTorch` estimator, we start a training job by calling `fit()` and passing the training data we uploaded to S3 earlier.
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(
entry_point="source/cifar10.py",
role=role,
framework_version="1.4.0",
py_version="py3",
instance_count=1,
instance_type="ml.c5.xlarge",
)
estimator.fit(inputs)
###Output
_____no_output_____
###Markdown
Deploy the model for inferenceAfter we train our model, we can deploy it to a SageMaker Endpoint, which serves prediction requests in real-time. To do so, we simply call `deploy()` on our estimator, passing in the desired number of instances and instance type for the endpoint:
###Code
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
###Output
_____no_output_____
###Markdown
Invoke the endpointWe then use the returned `predictor` object to invoke our endpoint. For demonstration purposes, we also print out the image, its original label, and its predicted label.
###Code
# get some test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# print images, labels, and predictions
show_img(torchvision.utils.make_grid(images))
print("GroundTruth: ", " ".join("%4s" % classes[labels[j]] for j in range(4)))
outputs = predictor.predict(images.numpy())
_, predicted = torch.max(torch.from_numpy(np.array(outputs)), 1)
print("Predicted: ", " ".join("%4s" % classes[predicted[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
CleanupOnce finished, we delete our endpoint to release the instances (and avoid incurring extra costs).
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Training and Hosting a PyTorch model in Amazon SageMaker*(This notebook was tested with the "Python 3 (PyTorch CPU Optimized)" kernel.)*Amazon SageMaker is a fully managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. The SageMaker Python SDK makes it easy to train and deploy models in Amazon SageMaker with several different machine learning and deep learning frameworks, including PyTorch.In this notebook, we use Amazon SageMaker to train a convolutional neural network using PyTorch and the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and then we host the model in Amazon SageMaker for inference. SetupLet's start by specifying:- An Amazon S3 bucket and prefix for training and model data. This should be in the same region used for SageMaker Studio, training, and hosting.- An IAM role for SageMaker to access your training and model data. If you wish to use a different role than the one set up for SageMaker Studio, replace `sagemaker.get_execution_role()` with the appropriate IAM role or ARN. For more about using IAM roles with SageMaker, see [the AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = "pytorch-cnn-cifar10-example"
role = sagemaker.get_execution_role()
###Output
_____no_output_____
###Markdown
Prepare the training dataThe [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is a subset of the [80 million tiny images dataset](https://people.csail.mit.edu/torralba/tinyimages). It consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. Download the dataFirst we download the dataset:
###Code
%%bash
wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
tar xfvz cifar-10-python.tar.gz
mkdir data
mv cifar-10-batches-py data/.
rm cifar-10-python.tar.gz
###Output
_____no_output_____
###Markdown
After downloading the dataset, we use the [`torchvision.datasets` module](https://pytorch.org/docs/stable/torchvision/datasets.html) to load the CIFAR-10 dataset, utilizing the [`torchvision.transforms` module](https://pytorch.org/docs/stable/torchvision/transforms.html) to convert the data into normalized tensor images:
###Code
from cifar_utils import classes, show_img, train_data_loader, test_data_loader
train_loader = train_data_loader()
test_loader = test_data_loader()
###Output
_____no_output_____
###Markdown
Preview the dataNow we can view some of the data we have prepared:
###Code
import numpy as np
import torchvision, torch
# get some random training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
# show images
show_img(torchvision.utils.make_grid(images))
# print labels
print(" ".join("%9s" % classes[labels[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
Upload the dataWe use the `sagemaker.s3.S3Uploader` to upload our dataset to Amazon S3. The return value `inputs` identifies the location -- we use this later for the training job.
###Code
from sagemaker.s3 import S3Uploader
inputs = S3Uploader.upload("data", "s3://{}/{}/data".format(bucket, prefix))
###Output
_____no_output_____
###Markdown
Prepare the entry-point scriptWhen SageMaker trains and hosts our model, it runs a Python script that we provide. (This is run as the entry point of a Docker container.) For training, this script contains the PyTorch code needed for the model to learn from our dataset. For inference, the code is for loading the model and processing the prediction input. For convenience, we put both the training and inference code in the same file. TrainingThe training code is very similar to a training script we might run outside of Amazon SageMaker, but we can access useful properties about the training environment through various environment variables. For this notebook, our script retrieves the following environment variable values:* `SM_HOSTS`: a list of hosts on the container network.* `SM_CURRENT_HOST`: the name of the current container on the container network.* `SM_MODEL_DIR`: the location for model artifacts. This directory is uploaded to Amazon S3 at the end of the training job.* `SM_CHANNEL_TRAINING`: the location of our training data.* `SM_NUM_GPUS`: the number of GPUs available to the current container.We also use a main guard (`if __name__=='__main__':`) to ensure that our training code is executed only for training, as SageMaker imports the entry-point script.For more about writing a PyTorch training script with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlprepare-a-pytorch-training-script). InferenceFor inference, we need to implement a few specific functions to tell SageMaker how to load our model and handle prediction input.* `model_fn(model_dir)`: loads the model from disk. This function must be implemented.* `input_fn(serialized_input_data, content_type)`: deserializes the prediction input.* `predict_fn(input_data, model)`: calls the model on the deserialized data.* `output_fn(prediction_output, accept)`: serializes the prediction output.The last three functions - `input_fn`, `predict_fn`, and `output_fn` - are optional because SageMaker has default implementations to handle common content types. However, there is no default implementation of `model_fn` for PyTorch models on SageMaker, so our script has to implement `model_fn`.For more about PyTorch inference with SageMaker, please see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.htmlid3). Put it all togetherHere is the full script for both training and hosting our convolutional neural network:
###Code
!pygmentize source/cifar10.py
###Output
_____no_output_____
###Markdown
Run a SageMaker training jobThe SageMaker Python SDK makes it easy for us to interact with SageMaker. Here, we use the `PyTorch` estimator class to start a training job. We configure it with the following parameters:* `entry_point`: our training script.* `role`: an IAM role that SageMaker uses to access training and model data.* `framework_version`: the PyTorch version we wish to use. For a list of supported versions, see [here](https://github.com/aws/sagemaker-python-sdk#pytorch-sagemaker-estimators).* `train_instance_count`: the number of training instances.* `train_instance_type`: the training instance type. For a list of supported instance types, see [the AWS Documentation](https://aws.amazon.com/sagemaker/pricing/instance-types/).Once we have our `PyTorch` estimator, we start a training job by calling `fit()` and passing the training data we uploaded to S3 earlier.
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(
entry_point="source/cifar10.py",
role=role,
framework_version="1.4.0",
train_instance_count=1,
train_instance_type="ml.c5.xlarge",
)
estimator.fit(inputs)
###Output
_____no_output_____
###Markdown
Deploy the model for inferenceAfter we train our model, we can deploy it to a SageMaker Endpoint, which serves prediction requests in real-time. To do so, we simply call `deploy()` on our estimator, passing in the desired number of instances and instance type for the endpoint:
###Code
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
###Output
_____no_output_____
###Markdown
Invoke the endpointWe then use the returned `predictor` object to invoke our endpoint. For demonstration purposes, we also print out the image, its original label, and its predicted label.
###Code
# get some test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# print images, labels, and predictions
show_img(torchvision.utils.make_grid(images))
print("GroundTruth: ", " ".join("%4s" % classes[labels[j]] for j in range(4)))
outputs = predictor.predict(images.numpy())
_, predicted = torch.max(torch.from_numpy(np.array(outputs)), 1)
print("Predicted: ", " ".join("%4s" % classes[predicted[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
CleanupOnce finished, we delete our endpoint to release the instances (and avoid incurring extra costs).
###Code
predictor.delete_endpoint()
###Output
_____no_output_____ |
src/machine_learning_src/ML_parameter_optimization.ipynb | ###Markdown
Setup
###Code
import random
import collections
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn import metrics
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as transforms
from PIL import Image
import matplotlib.pyplot as plt
import time
import multiprocessing
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
# Hyperparameters
hps = {
'seed': 42,
'augment': False
}
###Output
_____no_output_____
###Markdown
Dataset
###Code
def load_file(filename):
file = open(filename)
lines = file.readlines()
data = []
for line in lines:
arr = []
for num in line.split(","):
arr.append(float(num))
data.append(arr)
return np.array(data)
def data_augment(spectra, augmentation, noiselevel, apply_peak_shift):
spectra_augment = np.empty(spectra.shape, dtype=float, order='C')
for idx in range(len(spectra)):
spec = spectra[idx].reshape(1, -1)
max_Intensity = np.max(spec, axis = 1)
if augmentation:
if random.uniform(0, 1) > 0: # noise
noise = np.random.normal(0, max_Intensity * noiselevel, spec.shape[1]).astype(np.float32)
spec = spec + noise
if apply_peak_shift:
if random.uniform(0, 1) > 0.5: # Shift
temp = np.zeros(spec.shape).astype(np.float32)
shift_count = np.random.randint(1, 30)
if random.uniform(0, 1) > 0.5: # Shift right
temp[:, shift_count:] = spec[:, :-shift_count]
else:
temp[:, :-shift_count] = spec[:, shift_count:]
spec = temp
spec = spec.astype(np.float32)
spec = spec.flatten()
spec = spec/max_Intensity
spectra_augment[idx] = spec
return spectra_augment
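
# Illustrative usage of data_augment (hypothetical array, not part of the dataset loaded below):
#   demo = np.random.rand(8, 1000)                      # 8 spectra, 1000 intensity bins
#   demo_aug = data_augment(demo, augmentation=True, noiselevel=0.01, apply_peak_shift=True)
# Each returned row is normalized by its maximum intensity, with optional Gaussian noise and random peak shifts.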
# Load data
Y = []
data = []
# path = 'datasets/dielectric_dataset/'
# for i in range(4):
# curr_data = load_file(path + "dielectric" + str(i) + ".txt")
# data.append(curr_data)
# count = len(curr_data)
# Y.extend([i] * count)
path = 'datasets/charge_dataset/'
for i in range(4):
curr_data = load_file(path + "c" + str(i) + ".txt")
data.append(curr_data)
count = len(curr_data)
Y.extend([i] * count)
X = np.vstack(data)
Y = np.array(Y)
count = collections.defaultdict(int)
for num in Y:
count[num] += 1
classes_count = len(count.keys())
print("Total number of samples: ")
print(sum(v for v in count.values()))
for i in sorted(count.keys()):
print("Class", i, "Count:", count[i])
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, stratify=Y, train_size=0.8, shuffle=True)
xtrain = data_augment(xtrain, augmentation=False, noiselevel = 0, apply_peak_shift=False)
xtest = data_augment(xtest, augmentation= False, noiselevel = 0, apply_peak_shift=False)
print("X shape:\t", X.shape)
print("xtrain shape:\t", xtrain.shape)
print("xtest shape:\t", xtest.shape, '\n')
print("Y shape:\t", Y.shape)
print("ytrain shape:\t", ytrain.shape)
print("ytest shape:\t", ytest.shape)
# Naive Bayes parameter tuning
accuracy = []
var_smoothings = [20, 10, 5, 1, 0.1, 9e-2, 7e-2, 5e-2, 3e-2, 2e-2, 1e-2, 9e-3, 7e-3, 5e-3, 3e-3, 1e-3, 1e-4, 1e-5]
for var in var_smoothings:
gnb = GaussianNB(var_smoothing = var).fit(xtrain, ytrain)
tmp = gnb.score(xtest, ytest)
accuracy.append(tmp)
plt.figure(figsize = (6, 5))
ax = plt.plot(var_smoothings, np.array(accuracy)*100, linewidth=3)
plt.ylabel("Accuracy(%)")
plt.xlabel("var_smoothings (log)")
plt.xscale('log')
plt.ylim([50, 100])
plt.xlim([0.000005, 100])
plt.rcParams.update({'font.size': 14})
plt.show()
plt.savefig('NB_parameter_tuning.png', bbox_inches = 'tight')
#Decision Tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report, confusion_matrix
min_samples_splits = [0.1, 0.3, 0.5, 0.7, 0.9]
min_samples_leafs = np.linspace(0.1, 0.5, 8, endpoint=True)
accuracy = []
for split in min_samples_splits:
tmp = []
for numleaf in min_samples_leafs:
classifier = DecisionTreeClassifier(criterion = "entropy", random_state = 0,
max_depth = 3, min_samples_leaf = numleaf, min_samples_split = split)
classifier.fit(xtrain, ytrain)
ypred = classifier.predict(xtest)
res = accuracy_score(ytest, ypred)
tmp.append(res)
accuracy.append(tmp)
results = []
results.append(min_samples_splits)
results.append(min_samples_leafs)
results.append(accuracy)
plt.figure(figsize = (6, 5))
for i in range(len(min_samples_splits)):
acc = np.array(accuracy[i]) * 100
plt.plot(min_samples_leafs, acc, label="min_samples_splits:" +str(min_samples_splits[i]), linewidth=3)
plt.legend()
plt.ylim([10, 100])
plt.ylabel("Accuracy(%)")
plt.xlabel("min_samples_leafs")
plt.rcParams.update({'font.size': 14})
# plt.gca().yaxis.set_major_formatter(StrMethodFormatter('{x:,.2f}')) # 2 decimal places
plt.show()
plt.savefig('decisiontree_parameter_tuning.png', bbox_inches = 'tight')
# SVM parameter tuning
C = [3, 9, 27, 54, 81, 150]
gammas = ['scale', 'auto']  # the results show the 'auto' gamma performs poorly, so 'scale' is used (note: only kernel and C are swept below)
kernels = ['linear', 'rbf', 'poly']
accuracy = []
for k in kernels:
tmp = []
for c in C:
svm_model = SVC(kernel = k, C = c).fit(xtrain, ytrain)
res = svm_model.score(xtest, ytest)
tmp.append(res)
accuracy.append(tmp)
plt.figure(figsize = (6, 5))
for i in range(len(kernels)):
acc = np.array(accuracy[i]) * 100
plt.plot(C, acc, label="Kernel: " + str(kernels[i]), linewidth=3)
plt.legend()
plt.ylim([95, 100])
plt.ylabel("Accuracy(%)")
plt.xlabel("Penalty")
plt.show()
plt.savefig('SVM_parameter_tuning.png', bbox_inches = 'tight')
# Random forest
n_estimators = [100, 150, 200, 300, 400]
max_features = 2
max_depth = [5, 6, 7, 10, 15, 20]
accuracy = []
for d in max_depth:
tmp = []
for est in n_estimators:
randomForest = RandomForestClassifier(n_estimators=est, max_depth =d, random_state=0)
randomForest.fit(xtrain, ytrain)
ypred = randomForest.predict(xtest)
res = metrics.accuracy_score(ytest, ypred)
tmp.append(res)
accuracy.append(tmp)
plt.figure(figsize = (6, 5))
for i in range(len(max_depth)):
acc = np.array(accuracy[i]) * 100
plt.plot(n_estimators, acc, label="Max_depth: " + str(max_depth[i]))
plt.legend()
plt.ylabel("Accuracy (%)")
plt.xlabel("Number of estimators")
plt.ylim([97, 100])
plt.rcParams.update({'font.size': 14})
plt.show()
plt.savefig('randomForest_parameter_tuning.png', bbox_inches = 'tight')
# KNN parameter optimization
accuracy = []
neighbors = range(4, 100, 3)
for i in range(4):
tmp = []
for n_neigh in neighbors:
dist_order = i + 1
classifier = KNeighborsClassifier(n_neighbors=n_neigh, p = dist_order)
classifier.fit(xtrain, ytrain)
ypred = classifier.predict(xtest)
res = accuracy_score(ytest, ypred)
tmp.append(res)
accuracy.append(tmp)
results = []
results.append(neighbors)
results.append(accuracy)
torch.save(results, 'KNN_parameter_tuning')
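# Note: the next line loads a previously saved result file with a date suffix ('..._20220113'),
# which must already exist on disk; to reuse the results saved just above, load 'KNN_parameter_tuning' instead.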
accs = torch.load('KNN_parameter_tuning_20220113')
neighbors, accuracy = accs
plt.figure(figsize=(6, 5))
for i in range(4):
acc = np.array(accuracy[i]) * 100
plt.plot(neighbors, acc, label="Distance_order: " + str(i + 1), linewidth = 3)
plt.legend()
plt.ylim([90, 100])
plt.ylabel("Accuracy (%)")
plt.xlabel("Number of neighbors")
plt.rcParams.update({'font.size': 14})
plt.show()
###Output
_____no_output_____ |
plywood_gallery/quickstart_template/template_html_build.ipynb | ###Markdown
Second step of setting up your plywood-gallery! 🪵🪓Now we want to set up the `index.html` document. An HTML template is shipped with the plywood gallery as a jinja2 template, and now it is time to set up your project details. This needs two steps:1. go to `gallery_config.yaml` and paste all your project information in the yaml format. 2. come back to this notebook and run the `generate_html_from_jinja2_and_yaml` command in the cell below.Note: From time to time, a new HTML template will be shipped with the plywood gallery. If you want to update, just run the command `generate_html_from_jinja2_and_yaml` again; you don't have to set things up again.But why do we use yaml and not json? yaml is the better format for providing HTML snippets, as one can use both the " and the ' characters in strings.
###Code
from plywood_gallery import generate_html_from_jinja2_and_yaml
generate_html_from_jinja2_and_yaml(yaml_file="html_configuration.yaml", index_html_file="index.html")
# from plywood_gallery import open_webpage
# open_webpage()
###Output
_____no_output_____ |
examples/colab/action_recognition_with_tf_hub.ipynb | ###Markdown
Copyright 2018 The TensorFlow Hub Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
TF-Hub Action Recognition Model This Colab demonstrates use of action recognition from video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module.The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper.The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d)."Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets by fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html).The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions.Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt).In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setting up the environment
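The overall flow the notebook builds up to looks roughly like the following sketch, which assumes the TF1-style `hub.Module` API used with `tensorflow.compat.v1` below and feeds a dummy clip shaped [batch, frames, 224, 224, 3] with values in [0, 1]:

import numpy as np
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub

tf.disable_v2_behavior()

# Sketch: load the I3D module, run a random clip through it, and read back probabilities for the 400 Kinetics actions.
with tf.Graph().as_default():
    module = hub.Module("https://tfhub.dev/deepmind/i3d-kinetics-400/1")
    inputs = tf.placeholder(tf.float32, shape=[None, None, 224, 224, 3])
    logits = module(inputs)
    probabilities = tf.nn.softmax(logits)
    with tf.train.MonitoredSession() as session:
        clip = np.random.rand(1, 16, 224, 224, 3).astype(np.float32)
        print(session.run(probabilities, feed_dict={inputs: clip}).shape)  # (1, 400)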
###Code
!pip install -q imageio
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
tf.disable_v2_behavior()
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "http://crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
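      # OpenCV decodes frames in BGR order; reorder the channels to RGB.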
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def animate(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
with open('./animation.gif','rb') as f:
display.display(display.Image(data=f.read(), height=300))
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
sample_video = load_video(fetch_ucf_video("v_CricketShot_g04_c02.avi"))
print("sample_video is a numpy array of shape %s." % str(sample_video.shape))
animate(sample_video)
# Run the i3d model on the video and print the top 5 actions.
# First add an empty dimension to the sample video as the model takes as input
# a batch of videos.
model_input = np.expand_dims(sample_video, axis=0)
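# model_input now has shape (1, num_frames, 224, 224, 3): a batch containing a single video.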
# Create the i3d model and get the action probabilities.
with tf.Graph().as_default():
i3d = hub.Module("https://tfhub.dev/deepmind/i3d-kinetics-400/1")
input_placeholder = tf.placeholder(shape=(None, None, 224, 224, 3), dtype=tf.float32)
logits = i3d(input_placeholder)
probabilities = tf.nn.softmax(logits)
with tf.train.MonitoredSession() as session:
[ps] = session.run(probabilities,
feed_dict={input_placeholder: model_input})
print("Top 5 actions:")
for i in np.argsort(ps)[::-1][:5]:
print("%-22s %.2f%%" % (labels[i], ps[i] * 100))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
TF-Hub Action Recognition Model Run in Google Colab View source on GitHub This Colab demonstrates use of action recognition from video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). "Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets from fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html). The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt). In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setting up the environment
###Code
# Install the necessary python packages.
!pip install -q "tensorflow>=1.7" "tensorflow-hub" "imageio"
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "http://crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def animate(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
with open('./animation.gif','rb') as f:
display.display(display.Image(data=f.read(), height=300))
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
sample_video = load_video(fetch_ucf_video("v_CricketShot_g04_c02.avi"))
print("sample_video is a numpy array of shape %s." % str(sample_video.shape))
animate(sample_video)
# Run the i3d model on the video and print the top 5 actions.
# First add an empty dimension to the sample video as the model takes as input
# a batch of videos.
model_input = np.expand_dims(sample_video, axis=0)
# Create the i3d model and get the action probabilities.
with tf.Graph().as_default():
i3d = hub.Module("https://tfhub.dev/deepmind/i3d-kinetics-400/1")
input_placeholder = tf.placeholder(shape=(None, None, 224, 224, 3), dtype=tf.float32)
logits = i3d(input_placeholder)
probabilities = tf.nn.softmax(logits)
with tf.train.MonitoredSession() as session:
[ps] = session.run(probabilities,
feed_dict={input_placeholder: model_input})
print("Top 5 actions:")
for i in np.argsort(ps)[::-1][:5]:
print("%-22s %.2f%%" % (labels[i], ps[i] * 100))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Action Recognition with an Inflated 3D CNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This Colab demonstrates use of action recognition from video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). "Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets from fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html). The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt). In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setup
###Code
!pip install -q imageio
!pip install -q opencv-python
!pip install -q git+https://github.com/tensorflow/docs
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import ssl
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "https://www.crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
# As of July 2020, crcv.ucf.edu doesn't use a certificate accepted by the
# default Colab environment anymore.
unverified_context = ssl._create_unverified_context()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT, context=unverified_context).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath, context=unverified_context).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def to_gif(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
return embed.embed_file('./animation.gif')
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
video_path = fetch_ucf_video("v_CricketShot_g04_c02.avi")
sample_video = load_video(video_path)
sample_video.shape
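# Load the I3D module as a TF2 SavedModel and pick its default serving signature.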
i3d = hub.load("https://tfhub.dev/deepmind/i3d-kinetics-400/1").signatures['default']
###Output
_____no_output_____
###Markdown
Run the I3D model and print the top-5 action predictions.
###Code
def predict(sample_video):
  # Add a batch axis to the sample video.
model_input = tf.constant(sample_video, dtype=tf.float32)[tf.newaxis, ...]
logits = i3d(model_input)['default'][0]
probabilities = tf.nn.softmax(logits)
print("Top 5 actions:")
for i in np.argsort(probabilities)[::-1][:5]:
print(f" {labels[i]:22}: {probabilities[i] * 100:5.2f}%")
predict(sample_video)
###Output
_____no_output_____
###Markdown
Now try a new video, from: https://commons.wikimedia.org/wiki/Category:Videos_of_sportsHow about [this video](https://commons.wikimedia.org/wiki/File:End_of_a_jam.ogv) by Patrick Gillett:
###Code
!curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv
video_path = "End_of_a_jam.ogv"
sample_video = load_video(video_path)[:100]
sample_video.shape
to_gif(sample_video)
predict(sample_video)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Action Recognition with an Inflated 3D CNN View on TensorFlow.org Run in Google Colab View on GitHub Download notebook See TF Hub model This Colab demonstrates recognizing actions in video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. More models to detect actions in videos can be found [here](https://tfhub.dev/s?module-type=video-classification). The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). "Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets from fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html). The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt). In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setup
###Code
!pip install -q imageio
!pip install -q opencv-python
!pip install -q git+https://github.com/tensorflow/docs
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import ssl
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "https://www.crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
# As of July 2020, crcv.ucf.edu doesn't use a certificate accepted by the
# default Colab environment anymore.
unverified_context = ssl._create_unverified_context()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT, context=unverified_context).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath, context=unverified_context).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def to_gif(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
return embed.embed_file('./animation.gif')
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
video_path = fetch_ucf_video("v_CricketShot_g04_c02.avi")
sample_video = load_video(video_path)
sample_video.shape
i3d = hub.load("https://tfhub.dev/deepmind/i3d-kinetics-400/1").signatures['default']
###Output
_____no_output_____
###Markdown
Run the I3D model and print the top-5 action predictions.
###Code
def predict(sample_video):
# Add a batch axis to the sample video.
model_input = tf.constant(sample_video, dtype=tf.float32)[tf.newaxis, ...]
logits = i3d(model_input)['default'][0]
probabilities = tf.nn.softmax(logits)
print("Top 5 actions:")
for i in np.argsort(probabilities)[::-1][:5]:
print(f" {labels[i]:22}: {probabilities[i] * 100:5.2f}%")
predict(sample_video)
###Output
_____no_output_____
###Markdown
Now try a new video, from: https://commons.wikimedia.org/wiki/Category:Videos_of_sportsHow about [this video](https://commons.wikimedia.org/wiki/File:End_of_a_jam.ogv) by Patrick Gillett:
###Code
!curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv
video_path = "End_of_a_jam.ogv"
sample_video = load_video(video_path)[:100]
sample_video.shape
to_gif(sample_video)
predict(sample_video)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Action Recognition with an Inflated 3D CNN View on TensorFlow.org Run in Google Colab View on GitHub Download notebook See TF Hub model This Colab demonstrates recognizing actions in video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. More models to detect actions in videos can be found [here](https://tfhub.dev/s?module-type=video-classification). The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). "Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets from fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html). The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt). In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setup
###Code
!pip install -q imageio
!pip install -q opencv-python
!pip install -q git+https://github.com/tensorflow/docs
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import ssl
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "https://www.crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
# As of July 2020, crcv.ucf.edu doesn't use a certificate accepted by the
# default Colab environment anymore.
unverified_context = ssl._create_unverified_context()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT, context=unverified_context).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath, context=unverified_context).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def to_gif(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
return embed.embed_file('./animation.gif')
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
video_path = fetch_ucf_video("v_CricketShot_g04_c02.avi")
sample_video = load_video(video_path)
sample_video.shape
i3d = hub.load("https://tfhub.dev/deepmind/i3d-kinetics-400/1").signatures['default']
###Output
_____no_output_____
###Markdown
Run the I3D model and print the top-5 action predictions.
###Code
def predict(sample_video):
  # Add a batch axis to the sample video.
model_input = tf.constant(sample_video, dtype=tf.float32)[tf.newaxis, ...]
logits = i3d(model_input)['default'][0]
probabilities = tf.nn.softmax(logits)
print("Top 5 actions:")
for i in np.argsort(probabilities)[::-1][:5]:
print(f" {labels[i]:22}: {probabilities[i] * 100:5.2f}%")
predict(sample_video)
###Output
_____no_output_____
###Markdown
Now try a new video, from: https://commons.wikimedia.org/wiki/Category:Videos_of_sportsHow about [this video](https://commons.wikimedia.org/wiki/File:End_of_a_jam.ogv) by Patrick Gillett:
###Code
!curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv
video_path = "End_of_a_jam.ogv"
sample_video = load_video(video_path)[:100]
sample_video.shape
to_gif(sample_video)
predict(sample_video)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Action Recognition with an Inflated 3D CNN View on TensorFlow.org Run in Google Colab View on GitHub Download notebook See TF Hub model This Colab demonstrates recognizing actions in video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. More models to detect actions in videos can be found [here](https://tfhub.dev/s?module-type=video-classification). The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). "Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets from fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html). The original module was trained on the [kinetics-400 dataset](https://www.deepmind.com/open-source/kinetics) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt). In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setup
###Code
!pip install -q imageio
!pip install -q opencv-python
!pip install -q git+https://github.com/tensorflow/docs
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import ssl
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "https://www.crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
# As of July 2020, crcv.ucf.edu doesn't use a certificate accepted by the
# default Colab environment anymore.
unverified_context = ssl._create_unverified_context()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT, context=unverified_context).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath, context=unverified_context).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def to_gif(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
return embed.embed_file('./animation.gif')
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
video_path = fetch_ucf_video("v_CricketShot_g04_c02.avi")
sample_video = load_video(video_path)
sample_video.shape
i3d = hub.load("https://tfhub.dev/deepmind/i3d-kinetics-400/1").signatures['default']
###Output
_____no_output_____
###Markdown
Run the I3D model and print the top-5 action predictions.
###Code
def predict(sample_video):
# Add a batch axis to the sample video.
model_input = tf.constant(sample_video, dtype=tf.float32)[tf.newaxis, ...]
logits = i3d(model_input)['default'][0]
probabilities = tf.nn.softmax(logits)
print("Top 5 actions:")
for i in np.argsort(probabilities)[::-1][:5]:
print(f" {labels[i]:22}: {probabilities[i] * 100:5.2f}%")
predict(sample_video)
###Output
_____no_output_____
###Markdown
Now try a new video, from: https://commons.wikimedia.org/wiki/Category:Videos_of_sportsHow about [this video](https://commons.wikimedia.org/wiki/File:End_of_a_jam.ogv) by Patrick Gillett:
###Code
!curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv
video_path = "End_of_a_jam.ogv"
sample_video = load_video(video_path)[:100]
sample_video.shape
to_gif(sample_video)
predict(sample_video)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
TF-Hub Action Recognition Model Run in Google Colab View source on GitHub This Colab demonstrates use of action recognition from video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). "Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets from fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html). The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt). In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setting up the environment
###Code
# Install the necessary python packages.
!pip install -q "tensorflow>=1.7" "tensorflow-hub" "imageio"
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
import tensorflow as tf
import tensorflow_hub as hub
tf.logging.set_verbosity(tf.logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "http://crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def animate(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
with open('./animation.gif','rb') as f:
display.display(display.Image(data=f.read(), height=300))
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
sample_video = load_video(fetch_ucf_video("v_CricketShot_g04_c02.avi"))
print("sample_video is a numpy array of shape %s." % str(sample_video.shape))
animate(sample_video)
# Run the i3d model on the video and print the top 5 actions.
# First add an empty dimension to the sample video as the model takes as input
# a batch of videos.
model_input = np.expand_dims(sample_video, axis=0)
# Create the i3d model and get the action probabilities.
with tf.Graph().as_default():
i3d = hub.Module("https://tfhub.dev/deepmind/i3d-kinetics-400/1")
input_placeholder = tf.placeholder(shape=(None, None, 224, 224, 3), dtype=tf.float32)
logits = i3d(input_placeholder)
probabilities = tf.nn.softmax(logits)
with tf.train.MonitoredSession() as session:
[ps] = session.run(probabilities,
feed_dict={input_placeholder: model_input})
print("Top 5 actions:")
for i in np.argsort(ps)[::-1][:5]:
print("%-22s %.2f%%" % (labels[i], ps[i] * 100))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
TF-Hub Action Recognition Model View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This Colab demonstrates use of action recognition from video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). "Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets from fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html). The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt). In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setup
###Code
!pip install -q imageio
!pip install -q opencv-python
!pip install -q git+https://github.com/tensorflow/docs
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "http://crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def to_gif(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
return embed.embed_file('./animation.gif')
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
video_path = fetch_ucf_video("v_CricketShot_g04_c02.avi")
sample_video = load_video(video_path)
to_gif(sample_video)
i3d = hub.load("https://tfhub.dev/deepmind/i3d-kinetics-400/1").signatures['default']
###Output
_____no_output_____
###Markdown
Run the I3D model and print the top-5 action predictions.
###Code
# Add a batch axis to the sample video.
model_input = tf.constant(sample_video, dtype=tf.float32)[tf.newaxis, ...]
logits = i3d(model_input)['default'][0]
probabilities = tf.nn.softmax(logits)
print("Top 5 actions:")
for i in np.argsort(probabilities)[::-1][:5]:
print(f" {labels[i]:22}: {probabilities[i] * 100:5.2f}%")
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Action Recognition with an Inflated 3D CNN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This Colab demonstrates recognizing actions in video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. More models to detect actions in videos can be found [here](https://tfhub.dev/s?module-type=video-classification). The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). "Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets from fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html). The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt). In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setup
###Code
!pip install -q imageio
!pip install -q opencv-python
!pip install -q git+https://github.com/tensorflow/docs
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import ssl
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "https://www.crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
# As of July 2020, crcv.ucf.edu doesn't use a certificate accepted by the
# default Colab environment anymore.
unverified_context = ssl._create_unverified_context()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT, context=unverified_context).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath, context=unverified_context).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def to_gif(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
return embed.embed_file('./animation.gif')
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
video_path = fetch_ucf_video("v_CricketShot_g04_c02.avi")
sample_video = load_video(video_path)
sample_video.shape
i3d = hub.load("https://tfhub.dev/deepmind/i3d-kinetics-400/1").signatures['default']
###Output
_____no_output_____
###Markdown
Run the I3D model and print the top-5 action predictions.
###Code
def predict(sample_video):
  # Add a batch axis to the sample video.
model_input = tf.constant(sample_video, dtype=tf.float32)[tf.newaxis, ...]
logits = i3d(model_input)['default'][0]
probabilities = tf.nn.softmax(logits)
print("Top 5 actions:")
for i in np.argsort(probabilities)[::-1][:5]:
print(f" {labels[i]:22}: {probabilities[i] * 100:5.2f}%")
predict(sample_video)
###Output
_____no_output_____
###Markdown
Now try a new video from https://commons.wikimedia.org/wiki/Category:Videos_of_sports. How about [this video](https://commons.wikimedia.org/wiki/File:End_of_a_jam.ogv) by Patrick Gillett:
###Code
!curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv
video_path = "End_of_a_jam.ogv"
sample_video = load_video(video_path)[:100]
sample_video.shape
to_gif(sample_video)
predict(sample_video)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
TF-Hub Action Recognition Model This Colab demonstrates the use of action recognition from video data using the [tfhub.dev/deepmind/i3d-kinetics-400/1](https://tfhub.dev/deepmind/i3d-kinetics-400/1) module. The underlying model is described in the paper "[Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750)" by Joao Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017 and was published as a CVPR 2017 conference paper. The source code is publicly available on [github](https://github.com/deepmind/kinetics-i3d). "Quo Vadis" introduced a new architecture for video classification, the Inflated 3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101 and HMDB51 datasets by fine-tuning these models. I3D models pre-trained on Kinetics also placed first in the CVPR 2017 [Charades challenge](http://vuchallenge.org/charades.html). The original module was trained on the [kinetics-400 dataset](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) and knows about 400 different actions. Labels for these actions can be found in the [label map file](https://github.com/deepmind/kinetics-i3d/blob/master/data/label_map.txt). In this Colab we will use it to recognize activities in videos from the UCF101 dataset. Setting up the environment
###Code
!pip install -q imageio
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "http://crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
def list_ucf_videos():
"""Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
"""Fetchs a video and cache into local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def animate(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
with open('./animation.gif','rb') as f:
display.display(display.Image(data=f.read(), height=300))
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
###Output
_____no_output_____
###Markdown
Using the UCF101 dataset
###Code
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
sample_video = load_video(fetch_ucf_video("v_CricketShot_g04_c02.avi"))
print("sample_video is a numpy array of shape %s." % str(sample_video.shape))
animate(sample_video)
# Run the i3d model on the video and print the top 5 actions.
# First add an empty dimension to the sample video as the model takes as input
# a batch of videos.
model_input = np.expand_dims(sample_video, axis=0)
# Create the i3d model and get the action probabilities.
with tf.Graph().as_default():
i3d = hub.Module("https://tfhub.dev/deepmind/i3d-kinetics-400/1")
input_placeholder = tf.placeholder(shape=(None, None, 224, 224, 3), dtype=tf.float32)
logits = i3d(input_placeholder)
probabilities = tf.nn.softmax(logits)
with tf.train.MonitoredSession() as session:
[ps] = session.run(probabilities,
feed_dict={input_placeholder: model_input})
print("Top 5 actions:")
for i in np.argsort(ps)[::-1][:5]:
print("%-22s %.2f%%" % (labels[i], ps[i] * 100))
###Output
_____no_output_____ |
code/fibers2graphs.ipynb | ###Markdown
1. Make Graph
###Code
from ndmg import graph
import numpy as np
import nibabel as nb
labels = '../data/atlas/desikan_s4.nii.gz'
n_rois = len(np.unique(nb.load(labels).get_data()))-1
fibs = np.load('../data/fibers/KKI2009_113_1_DTI_s4_fibers.npz')['arr_0']
generator = graph(n_rois, labels)
generator.make_graph(fibs)
generator.save_graph('../data/graph/desikan/KKI2009_113_1_DTI_s3_desikan.gpickle')
g = generator.get_graph()
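# g is a networkx graph whose nodes correspond to atlas ROIs; edge weights are
# assumed to count the streamlines connecting each pair of regions (see the adjacency plot below)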
###Output
{'ecount': 0, 'vcount': 70, 'region': 'brain', 'source': 'http://m2g.io', 'version': '0.0.33-1', 'date': 'Sun Oct 30 18:24:22 2016', 'sensor': 'Diffusion MRI', 'name': "Generated by NeuroData's MRI Graphs (ndmg)"}
# of Streamlines: 8848
0
###Markdown
2. Plot Graph
###Code
import matplotlib.pyplot as plt
import networkx as nx
%matplotlib inline
adj = nx.adj_matrix(g, nodelist=range(n_rois)).todense()
fig = plt.figure(figsize=(7,7))
p = plt.imshow(adj, interpolation='None')
###Output
_____no_output_____
###Markdown
3. Compute Summary Statistics
###Code
%%bash
ndmg_bids ../data/graph/ ../data/qc group
###Output
Parcellation: desikan
../data/qc/desikan
Computing: NNZ
Computing: Degree Seuqence
Computing: Edge Weight Sequence
Computing: Clustering Coefficient Sequence
Computing: Scan Statistic-1 Sequence
Computing: Eigen Value Sequence
Computing: Betweenness Centrality Sequence
###Markdown
4. Plot Summary Statistics
###Code
from IPython.display import Image
import json
with open('../data/qc/desikan/desikan_summary_info.json') as json_data:
d = json.load(json_data)
print(d)
Image(filename='../data/qc/desikan/desikan_summary.png')
###Output
{u'date': u'Sun Oct 30 18:24:39 2016', u'subjects': [u'KKI2009_113_1_DTI_s3_desikan.gpickle']}
|
JamesWilliams_WeatherPy.ipynb | ###Markdown
WeatherPy **Note**- Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
import os
# Import API key
from JamesWilliams_api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
###Output
_____no_output_____
###Markdown
Generate Cities List
###Code
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
###Output
_____no_output_____
###Markdown
Perform API Calls- Perform a weather check on each city using a series of successive API calls.- Include a print log of each city as it's being processed (with the city number and city name).
###Code
# Print to logger
print(f'\nBeginning Data Retrieval\n\
-----------------------------\n')
# Create counters
record = 0
sets = 1
cities_weather = []
# Loop through all the cities in our list
for city in cities:
record += 1
print(f'Processing Record {record} of Set {sets} | {city}')
# Group cities in sets of 50 for logging purposes
if(record == 50):
record = 0
sets += 1
# URL for Weather Map API Call
url = f"http://api.openweathermap.org/data/2.5/weather?units=Imperial&q={city}&APPID={weather_api_key}"
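    # pause briefly between calls to throttle requests to the OpenWeatherMap API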
time.sleep(1)
response = requests.get(url).json()
    # try and except to add the city's weather info to the results list
try:
cities_weather.append({
'City': city,
'Lat': response['coord']['lat'],
'Lon': response['coord']['lon'],
'Max Temp': response['main']['temp_max'],
'Humidity': response['main']['humidity'],
'Cloudiness': response['clouds']['all'],
'Wind Speed': response['wind']['speed'],
'Country': response['sys']['country'],
'Date': response['dt'],
})
except:
print('City not found. Skipping...')
print('-----------------------------\nData Retrieval Complete\n-----------------------------')
# print completed results
###Output
Beginning Data Retrieval
-----------------------------
Processing Record 1 of Set 1 | abha
Processing Record 2 of Set 1 | lac du bonnet
Processing Record 3 of Set 1 | san rafael
Processing Record 4 of Set 1 | fairbanks
Processing Record 5 of Set 1 | illoqqortoormiut
City not found. Skipping...
Processing Record 6 of Set 1 | nikolskoye
Processing Record 7 of Set 1 | taolanaro
City not found. Skipping...
Processing Record 8 of Set 1 | mackay
Processing Record 9 of Set 1 | busselton
Processing Record 10 of Set 1 | torbay
Processing Record 11 of Set 1 | dinar
Processing Record 12 of Set 1 | san patricio
Processing Record 13 of Set 1 | kodiak
Processing Record 14 of Set 1 | harnosand
Processing Record 15 of Set 1 | narsaq
Processing Record 16 of Set 1 | qaanaaq
Processing Record 17 of Set 1 | tuktoyaktuk
Processing Record 18 of Set 1 | avarua
Processing Record 19 of Set 1 | mar del plata
Processing Record 20 of Set 1 | bluff
Processing Record 21 of Set 1 | leningradskiy
Processing Record 22 of Set 1 | vaini
Processing Record 23 of Set 1 | benavente
Processing Record 24 of Set 1 | arraial do cabo
Processing Record 25 of Set 1 | port elizabeth
Processing Record 26 of Set 1 | hilo
Processing Record 27 of Set 1 | gizo
Processing Record 28 of Set 1 | razole
Processing Record 29 of Set 1 | mataura
Processing Record 30 of Set 1 | lamar
Processing Record 31 of Set 1 | albany
Processing Record 32 of Set 1 | cairns
Processing Record 33 of Set 1 | castro
Processing Record 34 of Set 1 | attawapiskat
City not found. Skipping...
Processing Record 35 of Set 1 | georgetown
Processing Record 36 of Set 1 | tomatlan
Processing Record 37 of Set 1 | tursunzoda
Processing Record 38 of Set 1 | arauca
Processing Record 39 of Set 1 | san juan de colon
Processing Record 40 of Set 1 | pringsewu
Processing Record 41 of Set 1 | phnum penh
City not found. Skipping...
Processing Record 42 of Set 1 | tynda
Processing Record 43 of Set 1 | port alfred
Processing Record 44 of Set 1 | skibbereen
Processing Record 45 of Set 1 | jinchang
Processing Record 46 of Set 1 | katangli
Processing Record 47 of Set 1 | fortuna
Processing Record 48 of Set 1 | san cristobal
Processing Record 49 of Set 1 | hithadhoo
Processing Record 50 of Set 1 | thompson
Processing Record 1 of Set 2 | touros
Processing Record 2 of Set 2 | butaritari
Processing Record 3 of Set 2 | mys shmidta
City not found. Skipping...
Processing Record 4 of Set 2 | sao filipe
Processing Record 5 of Set 2 | gari
Processing Record 6 of Set 2 | northam
Processing Record 7 of Set 2 | cidreira
Processing Record 8 of Set 2 | yellowknife
Processing Record 9 of Set 2 | sentyabrskiy
City not found. Skipping...
Processing Record 10 of Set 2 | yarmouth
Processing Record 11 of Set 2 | portland
Processing Record 12 of Set 2 | jamestown
Processing Record 13 of Set 2 | westport
Processing Record 14 of Set 2 | kapaa
Processing Record 15 of Set 2 | ngunguru
Processing Record 16 of Set 2 | peace river
Processing Record 17 of Set 2 | bambous virieux
Processing Record 18 of Set 2 | barrow
Processing Record 19 of Set 2 | rikitea
Processing Record 20 of Set 2 | hanzhong
Processing Record 21 of Set 2 | dikson
Processing Record 22 of Set 2 | hobart
Processing Record 23 of Set 2 | port hedland
Processing Record 24 of Set 2 | port said
Processing Record 25 of Set 2 | new norfolk
Processing Record 26 of Set 2 | puerto ayora
Processing Record 27 of Set 2 | pedasi
Processing Record 28 of Set 2 | sitka
Processing Record 29 of Set 2 | atuona
Processing Record 30 of Set 2 | riberalta
Processing Record 31 of Set 2 | lebu
Processing Record 32 of Set 2 | san carlos de bariloche
Processing Record 33 of Set 2 | mangrol
Processing Record 34 of Set 2 | plettenberg bay
Processing Record 35 of Set 2 | labuhan
Processing Record 36 of Set 2 | norman wells
Processing Record 37 of Set 2 | springbok
Processing Record 38 of Set 2 | tsihombe
City not found. Skipping...
Processing Record 39 of Set 2 | yarada
Processing Record 40 of Set 2 | victoria point
Processing Record 41 of Set 2 | lavrentiya
Processing Record 42 of Set 2 | kavieng
Processing Record 43 of Set 2 | moche
Processing Record 44 of Set 2 | rungata
City not found. Skipping...
Processing Record 45 of Set 2 | bucerias
Processing Record 46 of Set 2 | belushya guba
City not found. Skipping...
Processing Record 47 of Set 2 | longyearbyen
Processing Record 48 of Set 2 | porto novo
Processing Record 49 of Set 2 | pecos
Processing Record 50 of Set 2 | ushuaia
Processing Record 1 of Set 3 | aguimes
Processing Record 2 of Set 3 | mawlaik
Processing Record 3 of Set 3 | provideniya
Processing Record 4 of Set 3 | zaoyang
Processing Record 5 of Set 3 | tiarei
Processing Record 6 of Set 3 | punta arenas
Processing Record 7 of Set 3 | dholka
Processing Record 8 of Set 3 | karpathos
Processing Record 9 of Set 3 | paamiut
Processing Record 10 of Set 3 | khromtau
Processing Record 11 of Set 3 | haines junction
Processing Record 12 of Set 3 | barentsburg
City not found. Skipping...
Processing Record 13 of Set 3 | aasiaat
Processing Record 14 of Set 3 | hermanus
Processing Record 15 of Set 3 | gravdal
Processing Record 16 of Set 3 | koumac
Processing Record 17 of Set 3 | tiksi
Processing Record 18 of Set 3 | acapulco
Processing Record 19 of Set 3 | palmas
Processing Record 20 of Set 3 | sur
Processing Record 21 of Set 3 | olavarria
Processing Record 22 of Set 3 | coromandel
Processing Record 23 of Set 3 | araceli
Processing Record 24 of Set 3 | luderitz
Processing Record 25 of Set 3 | dingle
Processing Record 26 of Set 3 | saryshagan
City not found. Skipping...
Processing Record 27 of Set 3 | codrington
Processing Record 28 of Set 3 | ceska kamenice
Processing Record 29 of Set 3 | vaitupu
City not found. Skipping...
Processing Record 30 of Set 3 | lumphat
Processing Record 31 of Set 3 | upernavik
Processing Record 32 of Set 3 | tarakan
Processing Record 33 of Set 3 | nizhneyansk
City not found. Skipping...
Processing Record 34 of Set 3 | samusu
City not found. Skipping...
Processing Record 35 of Set 3 | clyde river
Processing Record 36 of Set 3 | sinnamary
Processing Record 37 of Set 3 | stromness
Processing Record 38 of Set 3 | cape town
Processing Record 39 of Set 3 | juneau
Processing Record 40 of Set 3 | severo-kurilsk
Processing Record 41 of Set 3 | hopkinsville
Processing Record 42 of Set 3 | airai
Processing Record 43 of Set 3 | chumikan
Processing Record 44 of Set 3 | east london
Processing Record 45 of Set 3 | turkistan
Processing Record 46 of Set 3 | tulun
Processing Record 47 of Set 3 | bethel
Processing Record 48 of Set 3 | kasempa
Processing Record 49 of Set 3 | tshela
Processing Record 50 of Set 3 | alofi
Processing Record 1 of Set 4 | merritt island
Processing Record 2 of Set 4 | hamilton
Processing Record 3 of Set 4 | pisco
Processing Record 4 of Set 4 | coahuayana
Processing Record 5 of Set 4 | klaksvik
Processing Record 6 of Set 4 | souillac
Processing Record 7 of Set 4 | karamken
City not found. Skipping...
Processing Record 8 of Set 4 | hami
Processing Record 9 of Set 4 | bredasdorp
Processing Record 10 of Set 4 | darhan
Processing Record 11 of Set 4 | san quintin
Processing Record 12 of Set 4 | coquimbo
Processing Record 13 of Set 4 | manado
Processing Record 14 of Set 4 | komsomolskiy
Processing Record 15 of Set 4 | riyadh
Processing Record 16 of Set 4 | emba
Processing Record 17 of Set 4 | angamali
Processing Record 18 of Set 4 | bonthe
Processing Record 19 of Set 4 | naze
Processing Record 20 of Set 4 | garm
City not found. Skipping...
Processing Record 21 of Set 4 | ribeira grande
Processing Record 22 of Set 4 | meulaboh
Processing Record 23 of Set 4 | lago da pedra
Processing Record 24 of Set 4 | barawe
City not found. Skipping...
Processing Record 25 of Set 4 | shimoda
Processing Record 26 of Set 4 | dubai
Processing Record 27 of Set 4 | grand river south east
City not found. Skipping...
Processing Record 28 of Set 4 | hambantota
Processing Record 29 of Set 4 | tigil
Processing Record 30 of Set 4 | taoudenni
Processing Record 31 of Set 4 | pevek
Processing Record 32 of Set 4 | rio gallegos
Processing Record 33 of Set 4 | tombouctou
Processing Record 34 of Set 4 | talaya
Processing Record 35 of Set 4 | manzil tamim
###Markdown
Convert Raw Data to DataFrame- Export the city data into a .csv.- Display the DataFrame
###Code
# Convert array of JSONs into Pandas DataFrame
cities_weather_df = pd.DataFrame(cities_weather)
# Show Record Count
cities_weather_df.count()
# Display the City Data Frame
cities_weather_df
###Output
_____no_output_____
###Markdown
Inspect the data and remove the cities where the humidity > 100%.- Skip this step if there are no cities that have humidity > 100%.
###Code
humidity_outliers = cities_weather_df.loc[cities_weather_df['Humidity'] > 100]
humidity_outliers
# Get the indices of cities that have humidity over 100%.
indices = cities_weather_df.index[cities_weather_df['Humidity']>100].tolist()
indices
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
clean_city_data = cities_weather_df.drop(indices, inplace=False)
clean_city_data
# Extract relevant fields from the data frame
# Export the City_Data into a csv
clean_city_data.to_csv('cities.csv', index=True, header=True)
###Output
_____no_output_____
###Markdown
Latitude vs. Temperature Plot
###Code
# Build scatter plot for latitude vs. temperature
x_values = cities_weather_df['Lat']
y_values = cities_weather_df['Max Temp']
plt.scatter(x_values,y_values, edgecolors='k')
# Incorporate the other graph properties
plt.xlabel('Latitude')
plt.ylabel('Max Temperature (F)')
today = pd.to_datetime("today")
plt.title(f'City Latitude vs. Max Temperature ({today})')
plt.grid()
# Save the figure
plt.savefig('output_data/Fig1.png')
# Show plot
plt.show()
###Output
_____no_output_____
###Markdown
Latitude vs. Humidity Plot
###Code
# Build the scatter plots for latitude vs. humidity
x_values = cities_weather_df['Lat']
y_values = cities_weather_df['Humidity']
plt.scatter(x_values,y_values, edgecolors='k')
# Incorporate the other graph properties
plt.xlabel('Latitude')
plt.ylabel('Humidity(%)')
today = pd.to_datetime("today")
plt.title(f'City Latitude vs. Humidity ({today})')
plt.grid()
# Save the figure
plt.savefig('output_data/Fig2.png')
# Show plot
plt.show()
###Output
_____no_output_____
###Markdown
Latitude vs. Cloudiness Plot
###Code
# Build the scatter plots for latitude vs. humidity
x_values = cities_weather_df['Lat']
y_values = cities_weather_df['Cloudiness']
plt.scatter(x_values,y_values, edgecolors='k')
# Incorporate the other graph properties
plt.xlabel('Latitude')
plt.ylabel('Cloudiness(%)')
today = pd.to_datetime("today")
plt.title(f'City Latitude vs. Cloudiness ({today})')
plt.grid()
# Save the figure
plt.savefig('output_data/Fig3.png')
# Show plot
plt.show()
###Output
_____no_output_____
###Markdown
Latitude vs. Wind Speed Plot
###Code
# Build the scatter plots for latitude vs. humidity
x_values = cities_weather_df['Lat']
y_values = cities_weather_df['Wind Speed']
plt.scatter(x_values,y_values, edgecolors='k')
# Incorporate the other graph properties
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (mph)')
today = pd.to_datetime("today")
plt.title(f'City Latitude vs. Wind Speed ({today})')
plt.grid()
# Save the figure
plt.savefig('output_data/Fig4.png')
# Show plot
plt.show()
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
# Create a function to create Linear Regression plots
def line_regression(x_values, y_values):
slope, intercept, r_value, p_value, std_err = linregress(x_values,y_values)
Y_pred = slope*x_values + intercept # make predictions
return Y_pred, r_value
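# Note: this helper is not actually called below (each hemisphere cell repeats the
# regression inline); it could be used as, for example:
#   Y_pred, r_value = line_regression(x_values, y_values)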
# Create Northern and Southern Hemisphere DataFrames
North_cities_weather_df = cities_weather_df.loc[cities_weather_df['Lat'] > 0]
South_cities_weather_df = cities_weather_df.loc[cities_weather_df['Lat'] < 0]
###Output
_____no_output_____
###Markdown
Max Temp vs. Latitude Linear Regression
###Code
# Linear regression on Northern Hemisphere
x_values = North_cities_weather_df['Lat']
y_values = North_cities_weather_df['Max Temp']
plt.scatter(x_values,y_values)
slope, intercept, r_value, p_value, std_err = linregress(x_values,y_values)
Y_pred = slope*x_values + intercept # make predictions
plt.plot(x_values, Y_pred, color='r')
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq, (1,1),fontsize=15,color="red")
plt.xlabel('Latitude')
plt.ylabel('Max Temp')
today = pd.to_datetime("today")
# Save the figure
plt.savefig('output_data/Fig5.png')
# Show plot and r value
print(f'The r_value is: {r_value}')
plt.show()
# Linear regression on Southern Hemisphere
x_values = South_cities_weather_df['Lat']
y_values = South_cities_weather_df['Max Temp']
plt.scatter(x_values,y_values)
slope, intercept, r_value, p_value, std_err = linregress(x_values,y_values)
Y_pred = slope*x_values + intercept # make predictions
plt.plot(x_values, Y_pred, color='r')
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq, (-25,40),fontsize=15,color="red")
plt.xlabel('Latitude')
plt.ylabel('Max Temp')
today = pd.to_datetime("today")
# Save the figure
plt.savefig('output_data/Fig6.png')
# Show plot and r value
print(f'The r_value is: {r_value}')
plt.show()
###Output
The r_value is: 0.7099355624370003
###Markdown
Humidity (%) vs. Latitude Linear Regression
###Code
# Northern Hemisphere
x_values = North_cities_weather_df['Lat']
y_values = North_cities_weather_df['Humidity']
plt.scatter(x_values,y_values)
slope, intercept, r_value, p_value, std_err = linregress(x_values,y_values)
Y_pred = slope*x_values + intercept
plt.plot(x_values, Y_pred, color='r')
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq, (40,10),fontsize=15,color="red")
plt.xlabel('Latitude')
plt.ylabel('Humidity')
today = pd.to_datetime("today")
# Save the figure
plt.savefig('output_data/Fig7.png')
# Show plot and r value
print(f'The r_value is: {r_value}')
plt.show()
# Southern Hemisphere
x_values = South_cities_weather_df['Lat']
y_values = South_cities_weather_df['Humidity']
plt.scatter(x_values,y_values)
slope, intercept, r_value, p_value, std_err = linregress(x_values,y_values)
Y_pred = slope*x_values + intercept # make predictions
plt.plot(x_values, Y_pred, color='r')
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq, (-50,20),fontsize=15,color="red")
plt.xlabel('Latitude')
plt.ylabel('Humidity')
today = pd.to_datetime("today")
# Save the figure
plt.savefig('output_data/Fig8.png')
# Show plot and r value
print(f'The r_value is: {r_value}')
plt.show()
###Output
The r_value is: 0.07771732883349165
###Markdown
Cloudiness (%) vs. Latitude Linear Regression
###Code
# Northern Hemisphere
x_values = North_cities_weather_df['Lat']
y_values = North_cities_weather_df['Cloudiness']
plt.scatter(x_values,y_values)
slope, intercept, r_value, p_value, std_err = linregress(x_values,y_values)
Y_pred = slope*x_values + intercept
plt.plot(x_values, Y_pred, color='r')
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq, (40,10),fontsize=15,color="red")
plt.xlabel('Latitude')
plt.ylabel('Cloudiness')
today = pd.to_datetime("today")
# Save the figure
plt.savefig('output_data/Fig9.png')
# Show plot and r value
print(f'The r_value is: {r_value}')
plt.show()
# Southern Hemisphere
x_values = South_cities_weather_df['Lat']
y_values = South_cities_weather_df['Cloudiness']
plt.scatter(x_values,y_values)
slope, intercept, r_value, p_value, std_err = linregress(x_values,y_values)
Y_pred = slope*x_values + intercept
plt.plot(x_values, Y_pred, color='r')
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq, (-30,30),fontsize=15,color="red")
plt.xlabel('Latitude')
plt.ylabel('Cloudiness')
today = pd.to_datetime("today")
# Save the figure
plt.savefig('output_data/Fig10.png')
# Show plot and r value
print(f'The r_value is: {r_value}')
plt.show()
###Output
The r_value is: 0.29161126424814265
###Markdown
Wind Speed (mph) vs. Latitude Linear Regression
###Code
# Northern Hemisphere
x_values = North_cities_weather_df['Lat']
y_values = North_cities_weather_df['Wind Speed']
plt.scatter(x_values,y_values)
slope, intercept, r_value, p_value, std_err = linregress(x_values,y_values)
Y_pred = slope*x_values + intercept
plt.plot(x_values, Y_pred, color='r')
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq, (40,25),fontsize=15,color="red")
plt.xlabel('Latitude')
plt.ylabel('Wind Speed')
today = pd.to_datetime("today")
# Save the figure
plt.savefig('output_data/Fig11.png')
# Show plot and r value
print(f'The r_value is: {r_value}')
plt.show()
# Southern Hemisphere
x_values = South_cities_weather_df['Lat']
y_values = South_cities_weather_df['Wind Speed']
plt.scatter(x_values,y_values)
slope, intercept, r_value, p_value, std_err = linregress(x_values,y_values)
Y_pred = slope*x_values + intercept
plt.plot(x_values, Y_pred, color='r')
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq, (-50,20),fontsize=15,color="red")
plt.xlabel('Latitude')
plt.ylabel('Wind Speed')
today = pd.to_datetime("today")
# Save the figure
plt.savefig('output_data/Fig12.png')
# Show plot and r value
print(f'The r_value is: {r_value}')
plt.show()
###Output
The r_value is: -0.2550202218663272
|
COMP534-AppliedArtificialIntelligence/COMP534-CA3.ipynb | ###Markdown
COMP 534 - Applied Artificial Intelligence CA3 - Image Classification with Convolutional Neural Networks This notebook was produced as a deliverable for a group project for the above module, as part of the 2021-2022 Data Science and Artificial Intelligence MSc course at the University of Liverpool. It comprises a performance analysis of two pre-trained CNNs. A new model is proposed to improve performance. The analysis is based on a multiclass classification problem using a dataset consisting of X-ray images of patients who are either healthy, diagnosed with covid-19, or diagnosed with pneumonia. Preparation Import required libraries
###Code
# file handling
import os
import shutil
# matrix manipulation and data handling
import numpy as np
import pandas as pd
# creating plots
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
import seaborn as sns
# monitoring training progress
from time import perf_counter
# pytorch library and utilities
import torch
from torch import nn
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from torchvision import transforms
# set random seeds for reproducibility
np.random.seed(123)
torch.manual_seed(321)
# sklearn evaluation utilities
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
# use colab gpu if enabled via runtime menu
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
device
###Output
_____no_output_____
###Markdown
Load dataset Dataset downloadable from kaggle here: https://www.kaggle.com/datasets/prashant268/chest-xray-covid19-pneumonia - last accessed on 06/05/22. A cleaned version of the dataset was produced and can be found here: https://www.kaggle.com/datasets/yazanqasem/assignment - last accessed on 09/05/22. Load directly from kaggle into the notebook. Note, for this to run you need to have your own kaggle API access token stored in the root directory of your Google Drive
###Code
# mount googledrive
from google.colab import drive
drive.mount('/content/drive')
!pip install kaggle > /dev/null
!mkdir ~/.kaggle
# copy kaggle access token from googledrive
!cp /content/drive/MyDrive/kaggle.json ~/.kaggle/kaggle.json
!chmod 600 ~/.kaggle/kaggle.json
# download and unzip dataset
!kaggle datasets download yazanqasem/assignment
!unzip /content/assignment.zip > /dev/null
###Output
Mounted at /content/drive
Downloading assignment.zip to /content
100% 2.01G/2.02G [00:54<00:00, 47.0MB/s]
100% 2.02G/2.02G [00:54<00:00, 39.5MB/s]
###Markdown
Create validation split A validation split will be taken from the training data to monitor training performance. Due to the class imbalance, the split will be stratified to maintain the same imbalance across the validation set. The train and test sets already have a constant class imbalance.
###Code
# define proportion of training data to be used in validation split
validSplit = 0.2
# create folders for validation data
!mkdir /content/Data/valid
!mkdir /content/Data/valid/COVID19/
!mkdir /content/Data/valid/NORMAL/
!mkdir /content/Data/valid/PNEUMONIA/
# create variables for folder path of each data split
trainFolder = '/content/Data/train/'
validFolder = '/content/Data/valid/'
testFolder = '/content/Data/test/'
# loop over class folders in training folder
for classFolder in os.listdir(trainFolder):
# count number of images
imgCount = len(os.listdir(trainFolder+classFolder))
# create 1d array where each element corresponds to the index of image in current folder
imgIdxArray = np.array(range(imgCount))
# randomly select validation split proportion of indices
selectIdxArray = np.random.choice(imgIdxArray, int(imgCount*validSplit), replace=False)
# loop over images in folder
for i, imgName in enumerate(os.listdir(trainFolder+classFolder)):
# if image index has been selected, move to validation folder
if i in selectIdxArray:
source = trainFolder+classFolder+'/'+imgName
destination = validFolder+classFolder+'/'+imgName
shutil.move(source, destination)
###Output
_____no_output_____
###Markdown
Confirm contents of each folder with added validation split
###Code
dir = '/content/Data'
# loop over folders
for folderSplit in os.listdir(dir):
# initialise dict to count each class within splits
folderDict = {}
# variable to count total images in current split
splitCount = 0
filePath1 = dir +'/'+ folderSplit
# loop over next level folders
for folderClass in os.listdir(filePath1):
filePath2 = filePath1 +'/'+ folderClass
# count images current folder
count = len(os.listdir(filePath2))
# increment split count
splitCount += count
# store class count
folderDict[folderClass] = count
# print summary of counts
print(f'{folderSplit} split contains {splitCount} images')
for item in folderDict.items():
print(f'{item[0]} class: {(100 * item[1] / splitCount):.1f}%')
print('')
###Output
test split contains 1269 images
NORMAL class: 25.0%
COVID19 class: 8.9%
PNEUMONIA class: 66.1%
valid split contains 1008 images
NORMAL class: 25.0%
COVID19 class: 8.4%
PNEUMONIA class: 66.6%
train split contains 4040 images
NORMAL class: 25.0%
COVID19 class: 8.5%
PNEUMONIA class: 66.5%
###Markdown
There is a class imbalance across the dataset. The imbalance is constant across each split. A class weights vector is defined in order to account for this imbalance.
###Code
# calculate class weights vector using validation split counts (imbalance is constant across splits)
classWeights = torch.Tensor([[(splitCount / (len(folderDict) * classCount)) for classCount in folderDict.values()]])
classWeights = nn.functional.normalize(classWeights)
classWeights = classWeights.reshape(3)
classWeights = classWeights.to(device)
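# each raw weight is total_count / (n_classes * class_count), so the rarest class
# (COVID19) receives the largest normalised weight; note the weight order follows
# the os.listdir iteration used when counting the class folders above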
###Output
_____no_output_____
###Markdown
Pre-processing All pytorch pretrained models expect the same model input. A series of transforms will be applied to ensure the input tensors are in the correct form.
###Code
# create transforms for use with pretrained models
transformsPretrained = transforms.Compose([
# pretrained models require 3 channel input (R=G=B in this case)
transforms.Grayscale(num_output_channels=3),
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
###Output
_____no_output_____
###Markdown
Create dataloaders Pytorch dataloaders provide an efficient way to feed the data to the model in batches.
###Code
# define batch size
batchSize = 64
# create image folder objects for each split and apply preprocessing transforms
trainData = ImageFolder(
root=trainFolder,
transform=transformsPretrained
)
validData = ImageFolder(
root=validFolder,
transform=transformsPretrained
)
testData = ImageFolder(
root=testFolder,
transform=transformsPretrained
)
# create data loader objects for each split, only shuffle training split
trainDataLoader = DataLoader(
trainData,
batch_size=batchSize,
shuffle=True
)
validDataLoader = DataLoader(
validData,
batch_size=batchSize,
shuffle=False
)
testDataLoader = DataLoader(
testData,
batch_size=batchSize,
shuffle=False
)
# store class label mapping
labelMap = trainData.class_to_idx
###Output
_____no_output_____
###Markdown
Training To set up the training process, a number of parameters are defined.
###Code
# store count of number of classes
numClasses = len(labelMap)
# calculate steps per epoch for training and validation splits
trainStepsEpoch = (len(trainData) // batchSize) + 1
validStepsEpoch = (len(validData) // batchSize) + 1
# define learning rate
lr = 0.001
# define loss function
lossFunction = nn.CrossEntropyLoss(weight=classWeights)
###Output
_____no_output_____
###Markdown
A training function is defined for use with the pretrained models.
###Code
def trainer(model, epochs):
# dictionary to store training evaluation
history = {
'trainLoss': [],
'validLoss': [],
'trainAccuracy': [],
'validAccuracy': []
}
# iterate over training epochs
for epoch in range(epochs):
# start timer for progress monitoring
startTime = perf_counter()
# put model into training mode
model.train()
# initialise epoch evaluation metrics
epochTrainLoss = 0
epochValidLoss = 0
epochTrainCorrect = 0
epochValidCorrect = 0
# iterate over train data batches
for idx, (X, label) in enumerate(trainDataLoader):
# send to gpu if available
(X, label) = (X.to(device), label.to(device))
# batch training cycle
optimiser.zero_grad()
prediction = model(X)
loss = lossFunction(prediction, label)
loss.backward()
optimiser.step()
# increment current epoch loss and count of correct classifications
epochTrainLoss += loss
epochTrainCorrect += (prediction.argmax(1) == label).type(torch.float).sum().item()
# for code development purposes only
# if idx == 0:
# break
# validation
# disable gradient calculation
with torch.no_grad():
# put model in evaluation mode
model.eval()
# loop over validation data batches
for idx, (X, label) in enumerate(validDataLoader):
# send to gpu if available
(X, label) = (X.to(device), label.to(device))
# make the predictions
prediction = model(X)
# increment current epoch loss and count of correct classifications
epochValidLoss += lossFunction(prediction, label)
epochValidCorrect += (prediction.argmax(1) == label).type(torch.float).sum().item()
# for code development purposes only
# if idx == 0:
# break
# calculate mean training and validation losses for epoch
meanEpochTrainLoss = epochTrainLoss / trainStepsEpoch
meanEpochValidLoss = epochValidLoss / validStepsEpoch
# calculate training and validation accuracies for epoch
trainEpochAccuracy = 100 * epochTrainCorrect / len(trainData)
validEpochAccuracy = 100 * epochValidCorrect / len(validData)
# store training history
history['trainLoss'].append(meanEpochTrainLoss.cpu().detach().numpy().item())
history['validLoss'].append(meanEpochValidLoss.cpu().detach().numpy().item())
history['trainAccuracy'].append(trainEpochAccuracy)
history['validAccuracy'].append(validEpochAccuracy)
# end epoch timer
endTime = perf_counter()
# print summary of epoch
print(f'Epoch: {epoch+1}. Time taken: {int((endTime-startTime)//60)}mins{int((endTime-startTime)%60)}s')
print(f'Training loss: {meanEpochTrainLoss:.2f} --- Validation loss: {meanEpochValidLoss:.2f} --- Training accuracy: {trainEpochAccuracy:.2f}% --- Validation accuracy: {validEpochAccuracy:.2f}%')
return history
###Output
_____no_output_____
###Markdown
Testing A tester function is defined in order to evaluate model performance.
###Code
def tester(model):
# initialise array to hold results
resultsArray = np.zeros((len(testData), 2))
# disable gradient calculation
with torch.no_grad():
# put model in evaluation mode
model.eval()
# loop over the validation data batches
for idx, (X, label) in enumerate(testDataLoader):
# send input to gpu if available
(X, label) = (X.to(device), label.to(device))
# make predictions on inputs
prediction = model(X)
# store batch predictions and true labels in numpy array
predictionNp = prediction.argmax(1).cpu().detach().numpy()
labelNp = label.cpu().detach().numpy()
# get start and end of current batch indices
start = idx * batchSize
end = (idx+1) * batchSize
# store in results array
resultsArray[start:end, 0] = labelNp.copy()
resultsArray[start:end, 1] = predictionNp.copy()
# for code development purposes only
# if idx == 3:
# break
return resultsArray
###Output
_____no_output_____
###Markdown
Evaluation
###Code
# initialise dataframe to store metrics for each model
evaluationDf = pd.DataFrame(columns=['Model', 'Accuracy', 'Precision', 'Recall', 'F-Score'])
# function to compute evaluation metrics from a results array
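# note: average='weighted' weights each class's precision/recall/F1 by its support,
# which is appropriate given the class imbalance in this dataset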
def evaluationMetrics(modelName, resultsArray):
return {
'Model': modelName,
'Accuracy': accuracy_score(resultsArray[:,0], resultsArray[:,1]),
'Precision': precision_score(resultsArray[:,0], resultsArray[:,1], average='weighted'),
'Recall': recall_score(resultsArray[:,0], resultsArray[:,1], average='weighted'),
'F-Score': f1_score(resultsArray[:,0], resultsArray[:,1], average='weighted')
}
###Output
_____no_output_____
###Markdown
VGG11 pretrained model
###Code
# load pretrained vgg11 model
vgg11 = torch.hub.load('pytorch/vision:v0.10.0', 'vgg11', pretrained=True)
# set all parameters to be fixed
for parameter in vgg11.parameters():
parameter.requires_grad = False
# modify last layer to have required number of outputs
vgg11.classifier[-1] = nn.Linear(vgg11.classifier[-1].in_features, numClasses)
# send model to gpu if available
vgg11 = vgg11.to(device)
# initialise optimiser for training the output layer weights only
optimiser = torch.optim.Adam(vgg11.classifier[-1].parameters(), lr=lr)
###Output
Downloading: "https://github.com/pytorch/vision/archive/v0.10.0.zip" to /root/.cache/torch/hub/v0.10.0.zip
Downloading: "https://download.pytorch.org/models/vgg11-8a719046.pth" to /root/.cache/torch/hub/checkpoints/vgg11-8a719046.pth
###Markdown
Training
###Code
# train model and store training history
vgg11History = trainer(vgg11, 10)
# create dataframe from results dictionary
df = pd.DataFrame(vgg11History)
df = df.reset_index()
df = df.rename(columns={'index':'Epoch'})
# change results to a long form for plotting
dfLoss = df.rename(columns={'trainLoss':'Training', 'validLoss':'Validation'})
dfLoss = pd.melt(
dfLoss,
id_vars=['Epoch'],
value_vars=['Training', 'Validation'],
var_name='Split',
value_name='Loss',)
dfAcc = df.rename(columns={'trainAccuracy':'Training', 'validAccuracy':'Validation'})
dfAcc = pd.melt(dfAcc,
id_vars=['Epoch'],
value_vars=['Training', 'Validation'],
var_name='Split',
value_name='Accuracy (%)',)
dfLoss = dfLoss.set_index(['Split', 'Epoch'])
dfLong = dfAcc.copy()
# join loss and accuracy long form dataframes back together
dfLong = dfLong.join(dfLoss, on=['Split', 'Epoch'])
# create training performance figure
fig, axs = plt.subplots(1, 2, figsize=(8, 4))
# plot loss
sns.lineplot(ax=axs[0], data=dfLong, x='Epoch', y='Loss', hue='Split')
axs[0].yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
# plot accuracy
sns.lineplot(ax=axs[1], data=dfLong, x='Epoch', y='Accuracy (%)', hue='Split')
axs[1].yaxis.set_major_formatter(FormatStrFormatter('%.0f'))
axs[1].get_legend().remove()
fig.tight_layout()
# add title
fig.suptitle('VGG11 training', y=1.05, fontsize=12)
fig.savefig('vgg11Training', bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Testing
###Code
# test model and store results
vgg11Results = tester(vgg11)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
# get class labels for confusion matrix
labelList = [key for key in labelMap.keys()]
labelList
# create figure for confusion matrix
fig, ax = plt.subplots(figsize=(4, 4))
# create confusion matrix display
vgg11CM = ConfusionMatrixDisplay(
confusion_matrix=confusion_matrix(vgg11Results[:,0],vgg11Results[:,1]),
display_labels=labelList)
# plot confusion matrix
vgg11CM.plot(ax=ax,
colorbar=False,
cmap='cividis')
# add title
fig.suptitle('VGG11 Confusion Matrix', fontsize=12)
fig.savefig('vgg11ConfMat', bbox_inches='tight')
plt.show()
# compute and add results to dataframe
evaluationDf = evaluationDf.append(evaluationMetrics('VGG11', vgg11Results), ignore_index=True)
evaluationDf
###Output
_____no_output_____
###Markdown
GoogleNet pretrained model
###Code
# load pretrained GoogleNet model
googleNet = torch.hub.load('pytorch/vision:v0.10.0', 'googlenet', pretrained=True)
# set all parameters to be fixed
for parameter in googleNet.parameters():
parameter.requires_grad = False
# modify last layer to have required number of outputs
googleNet.fc = nn.Linear(googleNet.fc.in_features, numClasses)
# send model to gpu if available
googleNet = googleNet.to(device)
# initialise optimiser for training the output layer weights only
optimiser = torch.optim.Adam(googleNet.fc.parameters(), lr=lr)
###Output
Using cache found in /root/.cache/torch/hub/pytorch_vision_v0.10.0
Downloading: "https://download.pytorch.org/models/googlenet-1378be20.pth" to /root/.cache/torch/hub/checkpoints/googlenet-1378be20.pth
###Markdown
Training
###Code
# train model and store training history
googleNetHistory = trainer(googleNet, 10)
# create dataframe from results dictionary
df = pd.DataFrame(googleNetHistory)
df = df.reset_index()
df = df.rename(columns={'index':'Epoch'})
# change results to a long form for plotting
dfLoss = df.rename(columns={'trainLoss':'Training', 'validLoss':'Validation'})
dfLoss = pd.melt(
dfLoss,
id_vars=['Epoch'],
value_vars=['Training', 'Validation'],
var_name='Split',
value_name='Loss',)
dfAcc = df.rename(columns={'trainAccuracy':'Training', 'validAccuracy':'Validation'})
dfAcc = pd.melt(dfAcc,
id_vars=['Epoch'],
value_vars=['Training', 'Validation'],
var_name='Split',
value_name='Accuracy (%)',)
dfLoss = dfLoss.set_index(['Split', 'Epoch'])
dfLong = dfAcc.copy()
# join loss and accuracy long form dataframes back together
dfLong = dfLong.join(dfLoss, on=['Split', 'Epoch'])
# create training performance figure
fig, axs = plt.subplots(1, 2, figsize=(8, 4))
# plot loss
sns.lineplot(ax=axs[0], data=dfLong, x='Epoch', y='Loss', hue='Split')
# plot accuracy
sns.lineplot(ax=axs[1], data=dfLong, x='Epoch', y='Accuracy (%)', hue='Split')
axs[1].get_legend().remove()
fig.tight_layout()
# add title
fig.suptitle('GoogleNet training', y=1.05, fontsize=12)
fig.savefig('googleNetTraining', bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Testing
###Code
# test model and store results
googleNetResults = tester(googleNet)
###Output
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:780: UserWarning: Note that order of the arguments: ceil_mode and return_indices will changeto match the args list in nn.MaxPool2d in a future release.
warnings.warn("Note that order of the arguments: ceil_mode and return_indices will change"
###Markdown
Evaluation
###Code
# check class labels for confusion matrix
labelList
# create figure for confusion matrix
fig, ax = plt.subplots(figsize=(4, 4))
# create confusion matrix display
gNetCM = ConfusionMatrixDisplay(
confusion_matrix=confusion_matrix(googleNetResults[:,0],googleNetResults[:,1]),
display_labels=labelList)
# plot confusion matrix
gNetCM.plot(ax=ax,
colorbar=False,
cmap='cividis')
# add title
fig.suptitle('GoogleNet Confusion Matrix', fontsize=12)
fig.savefig('googleNetConfMat', bbox_inches='tight')
plt.show()
# compute and add results to dataframe
evaluationDf = evaluationDf.append(evaluationMetrics('GoogleNet', googleNetResults), ignore_index=True)
evaluationDf
###Output
_____no_output_____
###Markdown
Improved model
###Code
# load pretrained googleNet model
gnMod = torch.hub.load('pytorch/vision:v0.10.0', 'googlenet', pretrained=True)
###Output
Using cache found in /root/.cache/torch/hub/pytorch_vision_v0.10.0
###Markdown
GoogleNet has only a single fully connected layer after the convolutional layers. We will add a new classifier to the end of the model consisting of a series of fully connected layers of decreasing size.
###Code
# define a new classifier
gnModClassifier = nn.Sequential(
# add fully connected layers of decreasing size, ReLU activation function and 20% dropout
nn.Linear(gnMod.fc.in_features, 512),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(256, 128),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(128, 64),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(64, 16),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(16, 3),
)
# replace GoogleNet classifier with new structure
gnMod.fc = gnModClassifier
# iterate over model modules and freeze the batch normalisation layers
for module in gnMod.modules():
    # check if module is batch normalisation type
    if isinstance(module, nn.BatchNorm2d):
        # fix the parameters belonging to this batch norm layer
        for param in module.parameters():
            param.requires_grad = False
# send model to gpu if available
gnMod = gnMod.to(device)
# initialise optimiser for training
optimiser = torch.optim.Adam(gnMod.parameters(), lr=lr)
###Output
_____no_output_____
###Markdown
Training
###Code
# train model and store training history
gnModHistory = trainer(gnMod, 8)
# create dataframe from results dictionary
df = pd.DataFrame(gnModHistory)
df = df.reset_index()
df = df.rename(columns={'index':'Epoch'})
# change results to a long form for plotting
dfLoss = df.rename(columns={'trainLoss':'Training', 'validLoss':'Validation'})
dfLoss = pd.melt(
dfLoss,
id_vars=['Epoch'],
value_vars=['Training', 'Validation'],
var_name='Split',
value_name='Loss',)
dfAcc = df.rename(columns={'trainAccuracy':'Training', 'validAccuracy':'Validation'})
dfAcc = pd.melt(dfAcc,
id_vars=['Epoch'],
value_vars=['Training', 'Validation'],
var_name='Split',
value_name='Accuracy (%)',)
dfLoss = dfLoss.set_index(['Split', 'Epoch'])
dfLong = dfAcc.copy()
# join loss and accuracy long form dataframes back together
dfLong = dfLong.join(dfLoss, on=['Split', 'Epoch'])
# create training performance figure
fig, axs = plt.subplots(1, 2, figsize=(8, 4))
# plot loss
sns.lineplot(ax=axs[0], data=dfLong, x='Epoch', y='Loss', hue='Split')
axs[0].yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
# plot accuracy
sns.lineplot(ax=axs[1], data=dfLong, x='Epoch', y='Accuracy (%)', hue='Split')
axs[1].yaxis.set_major_formatter(FormatStrFormatter('%.0f'))
axs[1].get_legend().remove()
fig.tight_layout()
# add title
fig.suptitle('GoogleNet-Modified training', y=1.05, fontsize=12)
fig.savefig('gnModTraining8Epochs', bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Testing
###Code
# test model and store results
gnModResults = tester(gnMod)
###Output
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:780: UserWarning: Note that order of the arguments: ceil_mode and return_indices will changeto match the args list in nn.MaxPool2d in a future release.
warnings.warn("Note that order of the arguments: ceil_mode and return_indices will change"
###Markdown
Evaluation
###Code
# check class labels for confusion matrix
labelList
# create figure for confusion matrix
fig, ax = plt.subplots(figsize=(4, 4))
# create confusion matrix display
gnModCM = ConfusionMatrixDisplay(
confusion_matrix=confusion_matrix(gnModResults[:,0], gnModResults[:,1]),
display_labels=labelList)
# plot confusion matrix
gnModCM.plot(ax=ax,
colorbar=False,
cmap='cividis')
# add title
fig.suptitle('GoogleNet-Modified Confusion Matrix', fontsize=12)
fig.savefig('gnModConfMat8Epochs', bbox_inches='tight')
plt.show()
evaluationDf = evaluationDf.append(evaluationMetrics('GoogleNet-Modified', gnModResults), ignore_index=True)
evaluationDf
###Output
_____no_output_____ |
dataframe_ds/dataframe_ds.ipynb | ###Markdown
.iloc and .loc are used for row selection
###Code
# a single column projection is a Series object
type(df['Name'])
# select all of the rows which related to 'sch1' using '.loc'
df.loc['sch1']['Name']
print(type(df.loc['sch1'])) # schould be a DataFrame
print(type(df.loc['sch1']['Name'])) # should be a Series
# The '.loc' attribute also supports slicing
# all the names and scores for all schools using the '.loc' operator
df.loc[:, ['Name', 'Score']]
# drop function doesn't change the DataFrame by default
df.drop('sch1')
df
# copy the dataframe
df_cpy = df.copy()
# to update the dataframe in place instead of returning a copy, use 'inplace=True'
# set 'axis=1' to indicate that a column (not a row) is being dropped
df_cpy.drop('Name', inplace=True, axis=1)
df_cpy
# a second way to drop a column
del df_cpy['Class']
df_cpy
# adding column to the dataframe
df['ClassRanking'] = None
df
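# .iloc selects by integer position rather than label; for example, df.iloc[0:2, 0:2]
# would return the first two rows and the first two columns of this DataFrame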
###Output
_____no_output_____ |
Part 02A - Additional Feature Creation.ipynb | ###Markdown
Notebook adding Additional Features
###Code
#Import Libraries
import pandas as pd
import numpy as np
from pandas.tseries.offsets import MonthEnd
csv_data = pd.read_csv('SP500_Index_Data_partial_clean.csv')
csv_data['Date'] = pd.to_datetime(csv_data['Date'], format="%Y-%m-%d") #Date column to datetime
csv_data = csv_data.sort_values(by='Date', ascending=True).reset_index(drop=True) #dates sorted to better visualize output
csv_data.drop(['USD_FF_mktcap'],axis=1,inplace=True) #remove unwanted columns
csv_data.rename(columns={'Local_Returns_12m': 'Price_Returns_12m', 'Local_Returns_1m': 'Price_Returns_1m'}, inplace=True) #rename certain columns
export_df = csv_data.copy() #copy of starting data, used later when we export the new features
csv_data
###Output
_____no_output_____
###Markdown
Imputation of Trailing 1 Year EPS Growth (Trail1yr_EPSgro)
###Code
df = csv_data.copy()
#df = df[df.Ticker == 'IBM'] #Sanity check using one stock
### Limit to just the columns we need, date ticker and trailing 1 yr EPS
df = df[['Date', 'Ticker','Trail_EPS']]
df = df.sort_values(by='Date', ascending=True) #dates sorted to better visualize output
df['Date'] = pd.to_datetime(df['Date'], format="%Y-%m-%d") #Date column to datetime
### This pandas method provides the monthly date we want from 12 months ago in a new column
df['Date_1year_Ago'] = df['Date'] + MonthEnd(-12)
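### e.g. MonthEnd(-12) maps a month-end date such as 2021-03-31 back to 2020-03-31 (illustrative dates only)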
### Temp df on which we'll 'reverse merge' the year prior EPS data
### this merge 'tricks' pandas to give us the year ago EPS
temp_df = df.copy()[['Date_1year_Ago', 'Ticker']] #copy of df with just the columns we need
temp_df.columns = ['Date', 'Ticker'] #change the column names to match df
temp_df = temp_df.merge(df[['Date','Ticker','Trail_EPS']], how='left', on=['Date', 'Ticker']) #this merge 'tricks' pandas to give us the year ago EPS
temp_df['Date_1year_Ago'] = temp_df['Date'] #flip column names to continue 'tricking' pandas
temp_df['Date'] = temp_df['Date_1year_Ago'] + MonthEnd(12) #make our 'Date' one year hence to continue the 'trick'
temp_df.columns = ['Date', 'Ticker', 'Trail_EPS_1year_Ago', 'Date_1year_Ago'] #rename the temp_df columns so we merge with the existing data properly
temp_df = temp_df[['Date', 'Ticker', 'Date_1year_Ago','Trail_EPS_1year_Ago']] #re-orders the columns
### apply the 'trick', merging df and temp_df with 1 year ago EPS
df = df.merge(temp_df, how='left', on=['Date', 'Ticker', 'Date_1year_Ago'])
del temp_df #no longer need temp_df
### calculate the growth rate
### Note that we multiply by 100 to match the formatting of our other growth rates (i.e. a '5' is 5%, a 0.05 is 0.05%)
df['Trail1yr_EPSgro'] = 100* ( df['Trail_EPS'] / df['Trail_EPS_1year_Ago'] -1 )
### If both prior and current EPS is zero (which seems unlikely, but is possible), correct the calculation to reflect 0% instead of NaN
df['Trail1yr_EPSgro'] = df.apply(lambda x: 0 if ((x['Trail_EPS']==0) & (x['Trail_EPS_1year_Ago']==0)) else
x['Trail1yr_EPSgro'],
axis=1)
### Get rid of inf and -inf values
df.replace([np.inf, -np.inf], np.nan, inplace=True)
### Check the output
display(df.describe())
display(df.tail(24))
### merge new field with starting csv data so we can export
df = df[['Date','Ticker','Trail1yr_EPSgro']] #limit to just the columns we need to merge
#export_df = csv_data.copy() #copy of starting data
export_df = export_df.merge(df, how='left', on=['Date','Ticker'])
export_df.drop(['Trail_EPS'], axis=1, inplace=True) #drop the Trail_EPS column as we no longer need it, nor is it a useable feature
#export_df
###Output
_____no_output_____
###Markdown
Imputation of Trailing 1 Year Dividend Per Share Growth (Trail1yr_DPSgro)
###Code
df = csv_data.copy()
### Impute the Trailing dividend per share or Trail_DPS field
### This is done by taking the dividend yield * the stock price
df['Trail_DPS'] = (df['Trail_DivYld'] / 100) * df['Price'] #dividend yield * the stock price is the imputed trailing DPS
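### e.g. a trailing yield value of 2 (i.e. 2%) on a $100 stock implies a trailing DPS of $2.00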
#df = df[df.Ticker == 'IBM'] #Sanity check using one stock
### Limit to just the columns we need, date ticker and trailing 1 yr DPS
df = df[['Date', 'Ticker','Trail_DPS']]
df = df.sort_values(by='Date', ascending=True) #dates sorted to better visualize output
df['Date'] = pd.to_datetime(df['Date'], format="%Y-%m-%d") #Date column to datetime
### This pandas method provides the monthly date we want from 12 months ago in a new column
df['Date_1year_Ago'] = df['Date'] + MonthEnd(-12)
### Temp df on which we'll 'reverse merge' the year prior DPS data
### this merge 'tricks' pandas to give us the year ago DPS
temp_df = df.copy()[['Date_1year_Ago', 'Ticker']] #copy of df with just the columns we need
temp_df.columns = ['Date', 'Ticker'] #change the column names to match df
temp_df = temp_df.merge(df[['Date','Ticker','Trail_DPS']], how='left', on=['Date', 'Ticker']) #this merge 'tricks' pandas to give us the year ago EPS
temp_df['Date_1year_Ago'] = temp_df['Date'] #flip column names to continue 'tricking' pandas
temp_df['Date'] = temp_df['Date_1year_Ago'] + MonthEnd(12) #make our 'Date' one year hence to continue the 'trick'
temp_df.columns = ['Date', 'Ticker', 'Trail_DPS_1year_Ago', 'Date_1year_Ago'] #rename the temp_df columns so we merge with the existing data properly
temp_df = temp_df[['Date', 'Ticker', 'Date_1year_Ago','Trail_DPS_1year_Ago']] #re-orders the columns
#display(temp_df)
### apply the 'trick', merging df and temp_df with 1 year ago EPS
df = df.merge(temp_df, how='left', on=['Date', 'Ticker', 'Date_1year_Ago'])
del temp_df #no longer need temp_df
### calculate the growth rate
### Note that we multiply by 100 to match the formatting of our other growth rates (i.e. a '5' is 5%, a 0.05 is 0.05%)
df['Trail1yr_DPSgro'] = 100* ( df['Trail_DPS'] / df['Trail_DPS_1year_Ago'] -1 )
### If both prior and current dividend are zero, correct the calculation to reflect 0% instead of NaN
df['Trail1yr_DPSgro'] = df.apply(lambda x: 0 if ((x['Trail_DPS']==0) & (x['Trail_DPS_1year_Ago']==0)) else
x['Trail1yr_DPSgro'],
axis=1)
### Get rid of inf and -inf values
df.replace([np.inf, -np.inf], np.nan, inplace=True)
### Check the output
display(df.describe())
#display(df.head(24))
display(df.tail(24))
### merge new field with starting csv data so we can export
df = df[['Date','Ticker','Trail1yr_DPSgro']] #limit to just the columns we need to merge
#export_df = csv_data.copy() #copy of starting data
export_df = export_df.merge(df, how='left', on=['Date','Ticker'])
#export_df
###Output
_____no_output_____
###Markdown
Imputation of Trailing 12 month Price Performance using 1 month Performance (To Test our Method -- Matches our existing 12m performance data)
###Code
# df = csv_data.copy()
# df = df[df.Ticker == 'IBM'] #Sanity check using one stock
# ### Limit to just the columns we need
# df = df[['Date', 'Ticker','Price_Returns_1m']]
# #display(df)
# ### Shift by __ months and calculate performance consistent with the existing performance columns
# ### Need the '+1' for 'chaining' the performance together for an accurate performance calc.
# df['1m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(0) +1
# df['2m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(1) +1
# df['3m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(2) +1
# df['4m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(3) +1
# df['5m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(4) +1
# df['6m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(5) +1
# df['7m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(6) +1
# df['8m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(7) +1
# df['9m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(8) +1
# df['10m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(9) +1
# df['11m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(10) +1
# df['12m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(11) +1
# ### Perform the chained performance calc
# df['Price_Returns_12m_ALT'] = ( df['1m']*df['2m']*df['3m']*df['4m']*df['5m']*df['6m']*df['7m']*df['8m']*df['9m']*df['10m']*df['11m']*df['12m'] )-1
# ### Check the output
# display(df.describe())
# #display(df.head(24))
# display(df.tail(24))
###Output
_____no_output_____
###Markdown
Imputation of Trailing 3 month Price Performance
###Code
df = csv_data.copy()
#df = df[df.Ticker == 'IBM'] #Sanity check using one stock
### Limit to just the columns we need
df = df[['Date', 'Ticker','Price_Returns_1m']]
#display(df)
### Shift by __ months and calculate performance consistent with the existing performance columns
### Need the '+1' for 'chaining' the performance together for an accurate performance calc.
df['1m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(0) +1
df['2m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(1) +1
df['3m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(2) +1
# df['4m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(3) +1
# df['5m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(4) +1
# df['6m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(5) +1
# df['7m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(6) +1
# df['8m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(7) +1
# df['9m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(8) +1
# df['10m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(9) +1
# df['11m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(10) +1
# df['12m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(11) +1
### Perform the chained performance calc
df['Price_Returns_3m'] = ( df['1m']*df['2m']*df['3m'] )-1 #*df['4m']*df['5m']*df['6m']*df['7m']*df['8m']*df['9m']*df['10m']*df['11m']*df['12m'] )-1
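### A more compact equivalent (commented out; a sketch assuming rows are in chronological order within each Ticker)
### would be a rolling product of (1 + monthly return):
# df['Price_Returns_3m'] = df.groupby('Ticker')['Price_Returns_1m'].transform(lambda s: (1 + s).rolling(3).apply(np.prod, raw=True) - 1)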
### Check the output
display(df.describe())
#display(df.head(24))
display(df.tail(24))
### merge new field with starting csv data so we can export
df = df[['Date','Ticker','Price_Returns_3m']] #limit to just the columns we need to merge
#export_df = csv_data.copy() #copy of starting data
export_df = export_df.merge(df, how='left', on=['Date','Ticker'])
#export_df
###Output
_____no_output_____
###Markdown
Imputation of Trailing 6 month Price Performance
###Code
df = csv_data.copy()
#df = df[df.Ticker == 'IBM'] #Sanity check using one stock
### Limit to just the columns we need
df = df[['Date', 'Ticker','Price_Returns_1m']]
#display(df)
### Shift by __ months and calculate performance consistent with the existing performance columns
### Need the '+1' for 'chaining' the performance together for an accurate performance calc.
df['1m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(0) +1
df['2m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(1) +1
df['3m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(2) +1
df['4m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(3) +1
df['5m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(4) +1
df['6m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(5) +1
# df['7m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(6) +1
# df['8m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(7) +1
# df['9m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(8) +1
# df['10m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(9) +1
# df['11m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(10) +1
# df['12m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(11) +1
### Perform the chained performance calc
df['Price_Returns_6m'] = ( df['1m']*df['2m']*df['3m']*df['4m']*df['5m']*df['6m'] )-1 #*df['7m']*df['8m']*df['9m']*df['10m']*df['11m']*df['12m'] )-1
### Check the output
display(df.describe())
#display(df.head(24))
display(df.tail(24))
### merge new field with starting csv data so we can export
df = df[['Date','Ticker','Price_Returns_6m']] #limit to just the columns we need to merge
#export_df = csv_data.copy() #copy of starting data
export_df = export_df.merge(df, how='left', on=['Date','Ticker'])
#export_df
###Output
_____no_output_____
###Markdown
Imputation of Trailing 12 month MOVING AVERAGE Price Performance
###Code
df = csv_data.copy()
#df = df[df.Ticker == 'IBM'] #Sanity check using one stock
### Limit to just the columns we need
df = df[['Date', 'Ticker','Price_Returns_1m']]
#display(df)
### Shift by __ months and calculate rolling average
df['1m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(0)
df['2m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(1)
df['3m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(2)
df['4m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(3)
df['5m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(4)
df['6m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(5)
df['7m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(6)
df['8m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(7)
df['9m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(8)
df['10m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(9)
df['11m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(10)
df['12m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(11)
### Perform the chained performance calc
df['Moving_Avg_Returns_12m'] = ( df['1m']+df['2m']+df['3m']+df['4m']+df['5m']+df['6m']+df['7m']+df['8m']+df['9m']+df['10m']+df['11m']+df['12m'] )/12
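### Equivalent rolling-mean alternative (commented out; a sketch assuming rows are in chronological order within each Ticker):
# df['Moving_Avg_Returns_12m'] = df.groupby('Ticker')['Price_Returns_1m'].transform(lambda s: s.rolling(12).mean())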
### Check the output
display(df.describe())
#display(df.head(24))
display(df.tail(24))
### merge new field with starting csv data so we can export
df = df[['Date','Ticker','Moving_Avg_Returns_12m']] #limit to just the columns we need to merge
#export_df = csv_data.copy() #copy of starting data
export_df = export_df.merge(df, how='left', on=['Date','Ticker'])
#export_df
###Output
_____no_output_____
###Markdown
Imputation of Trailing 6 month MOVING AVERAGE Price Performance
###Code
df = csv_data.copy()
#df = df[df.Ticker == 'IBM'] #Sanity check using one stock
### Limit to just the columns we need
df = df[['Date', 'Ticker','Price_Returns_1m']]
#display(df)
### Shift by __ months and calculate rolling average
df['1m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(0)
df['2m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(1)
df['3m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(2)
df['4m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(3)
df['5m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(4)
df['6m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(5)
# df['7m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(6)
# df['8m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(7)
# df['9m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(8)
# df['10m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(9)
# df['11m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(10)
# df['12m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(11)
### Perform the chained performance calc
df['Moving_Avg_Returns_6m'] = ( df['1m']+df['2m']+df['3m']+df['4m']+df['5m']+df['6m'] )/6 #+df['7m']+df['8m']+df['9m']+df['10m']+df['11m']+df['12m'] )/12
### Check the output
display(df.describe())
#display(df.head(24))
display(df.tail(24))
### merge new field with starting csv data so we can export
df = df[['Date','Ticker','Moving_Avg_Returns_6m']] #limit to just the columns we need to merge
#export_df = csv_data.copy() #copy of starting data
export_df = export_df.merge(df, how='left', on=['Date','Ticker'])
#export_df
###Output
_____no_output_____
###Markdown
Imputation of Trailing 3 month MOVING AVERAGE Price Performance
###Code
df = csv_data.copy()
#df = df[df.Ticker == 'IBM'] #Sanity check using one stock
### Limit to just the columns we need
df = df[['Date', 'Ticker','Price_Returns_1m']]
#display(df)
### Shift by __ months and calculate rolling average
df['1m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(0)
df['2m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(1)
df['3m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(2)
# df['4m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(3)
# df['5m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(4)
# df['6m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(5)
# df['7m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(6)
# df['8m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(7)
# df['9m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(8)
# df['10m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(9)
# df['11m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(10)
# df['12m'] = df.groupby(['Ticker'], axis=0)['Price_Returns_1m'].shift(11)
### Perform the chained performance calc
df['Moving_Avg_Returns_3m'] = ( df['1m']+df['2m']+df['3m'] )/3 #+df['4m']+df['5m']+df['6m'] )/6 #+df['7m']+df['8m']+df['9m']+df['10m']+df['11m']+df['12m'] )/12
### Check the output
display(df.describe())
#display(df.head(24))
display(df.tail(24))
### merge new field with starting csv data so we can export
df = df[['Date','Ticker','Moving_Avg_Returns_3m']] #limit to just the columns we need to merge
#export_df = csv_data.copy() #copy of starting data
export_df = export_df.merge(df, how='left', on=['Date','Ticker'])
#export_df
###Output
_____no_output_____
###Markdown
Look at the Export dataframe before exporting
###Code
print(export_df.columns.to_list())
### Look at the export dataframe
column_order = [ 'Date', 'Ticker', 'Name', 'Sector', 'Price', 'Price_Returns_12m', 'Price_Returns_6m', 'Price_Returns_3m', 'Price_Returns_1m', 'Moving_Avg_Returns_12m', 'Moving_Avg_Returns_6m', 'Moving_Avg_Returns_3m','Trail_DivYld', 'PB', 'Trail_EV_EBITDA', 'Trail_PE', 'Trail3yrAvg_EPSgro', 'Trail3yrAvg_DPSgro', 'Volatility', 'Debt_to_MktCap', 'NetDebt_EBITDA', 'Trail1yr_EPSgro', 'Trail1yr_DPSgro',]
export_df = export_df[column_order].copy()
export_df
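### Optional quick completeness check before exporting, e.g.:
# export_df.isna().sum()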
### export to Google Drive
# export_df.to_csv("/content/gdrive/Shareddrives/Milestone2/SP500_Index_Data_with_Added_Features.csv", index=False)
###Output
_____no_output_____
###Markdown
Scratchpad Area for examining the export_df
###Code
#export_df[['Price_Returns_12m', 'Price_Returns_6m', 'Price_Returns_3m', 'Price_Returns_1m', 'Moving_Avg_Returns_12m', 'Moving_Avg_Returns_6m', 'Moving_Avg_Returns_3m']].corr()
print(export_df.columns.to_list())
### view corr table for price features
export_df[['Price_Returns_12m', 'Price_Returns_6m', 'Price_Returns_3m', 'Price_Returns_1m', 'Moving_Avg_Returns_12m', 'Moving_Avg_Returns_6m', 'Moving_Avg_Returns_3m']].corr().round(2)
### view corr table for NON-price features
export_df[['Trail_DivYld', 'PB', 'Trail_EV_EBITDA', 'Trail_PE', 'Trail3yrAvg_EPSgro', 'Trail3yrAvg_DPSgro', 'Volatility', 'Debt_to_MktCap', 'NetDebt_EBITDA', 'Trail1yr_EPSgro', 'Trail1yr_DPSgro']].corr().round(2)
### export corr tables to csv, put into excel, and make the colors look nice
# export_df[['Price_Returns_12m', 'Price_Returns_6m', 'Price_Returns_3m', 'Price_Returns_1m', 'Moving_Avg_Returns_12m', 'Moving_Avg_Returns_6m', 'Moving_Avg_Returns_3m']].corr().to_csv("/content/gdrive/Shareddrives/Milestone2/performance_feature_corr.csv", index=True)
# export_df[['Trail_DivYld', 'PB', 'Trail_EV_EBITDA', 'Trail_PE', 'Trail3yrAvg_EPSgro', 'Trail3yrAvg_DPSgro', 'Volatility', 'Debt_to_MktCap', 'NetDebt_EBITDA', 'Trail1yr_EPSgro', 'Trail1yr_DPSgro']].corr().to_csv("/content/gdrive/Shareddrives/Milestone2/fundamental_feature_corr.csv", index=True)
###Output
_____no_output_____ |
tutorial/tutorial05_metadata_preprocessing.ipynb | ###Markdown
Metadata preprocessing tutorial Melusine **prepare_data.metadata_engineering subpackage** provides classes to preprocess the metadata:- **MetaExtension:** a transformer which creates an 'extension' feature extracted from regex in metadata. It extracts the extensions of mail addresses.- **MetaDate:** a transformer which creates new features from dates such as: hour, minute, dayofweek.- **MetaAttachmentType:** a transformer which creates an 'attachment type' feature extracted from regex in metadata. It extracts the extensions of attached files.- **Dummifier:** a transformer to dummify categorical features. All the classes have **fit_transform** methods. Input dataframe - To use a **MetaExtension** transformer: the dataframe requires a **from** column- To use a **MetaDate** transformer: the dataframe requires a **date** column- To use a **MetaAttachmentType** transformer: the dataframe requires an **attachment** column with the list of attached files
###Code
from melusine.data.data_loader import load_email_data
import ast
df_emails = load_email_data()
df_emails = df_emails[['from','date', 'attachment']]
df_emails['from']
df_emails['date']
df_emails['attachment'] = df_emails['attachment'].apply(ast.literal_eval)
df_emails['attachment']
###Output
_____no_output_____
###Markdown
MetaExtension transformer A **MetaExtension transformer** creates an *extension* feature extracted from regex in metadata. It extracts the extensions of mail addresses.
###Code
from melusine.prepare_email.metadata_engineering import MetaExtension
meta_extension = MetaExtension()
df_emails = meta_extension.fit_transform(df_emails)
df_emails.extension
###Output
_____no_output_____
###Markdown
MetaDate transformer A **MetaDate transformer** creates new features from dates: hour, minute and dayofweek
###Code
from melusine.prepare_email.metadata_engineering import MetaDate
meta_date = MetaDate()
df_emails = meta_date.fit_transform(df_emails)
df_emails.date[0]
df_emails.hour[0]
df_emails.loc[0,'min']
df_emails.dayofweek[0]
###Output
_____no_output_____
###Markdown
MetaAttachmentType transformer A **MetaAttachmentType transformer** creates an *attachment_type* feature extracted from a list of attachment names. It extracts the extensions of the attached files.
###Code
from melusine.prepare_email.metadata_engineering import MetaAttachmentType
meta_pj = MetaAttachmentType()
df_emails = meta_pj.fit_transform(df_emails)
df_emails.attachment_type
###Output
_____no_output_____
###Markdown
Dummifier transformer A **Dummifier transformer** dummifies categorical features. Its arguments are:- **columns_to_dummify**: a list of the metadata columns to dummify.
###Code
from melusine.prepare_email.metadata_engineering import Dummifier
dummifier = Dummifier(columns_to_dummify=['extension','attachment_type', 'dayofweek', 'hour', 'min'])
df_meta = dummifier.fit_transform(df_emails)
df_meta.columns
df_meta.head()
df_meta.to_csv('./data/metadata.csv', index=False, encoding='utf-8', sep=';')
###Output
_____no_output_____
###Markdown
Metadata preprocessing tutorial Melusine **prepare_data.metadata_engineering subpackage** provides classes to preprocess the metadata:- **MetaExtension:** a transformer which creates an 'extension' feature extracted from regex in metadata. It extracts the extensions of mail addresses.- **MetaDate:** a transformer which creates new features from dates such as: hour, minute, dayofweek.- **Dummifier:** a transformer to dummify categorical features. All the classes have **fit_transform** methods. Input dataframe - To use a **MetaExtension** transformer: the dataframe requires a **from** column- To use a **MetaDate** transformer: the dataframe requires a **date** column
###Code
from melusine.data.data_loader import load_email_data
df_emails = load_email_data()
df_emails = df_emails[['from','date']]
df_emails['from']
df_emails['date']
###Output
_____no_output_____
###Markdown
MetaExtension transformer A **MetaExtension transformer** creates an *extension* feature extracted from regex in metadata. It extracts the extensions of mail addresses.
###Code
from melusine.prepare_email.metadata_engineering import MetaExtension
meta_extension = MetaExtension()
df_emails = meta_extension.fit_transform(df_emails)
df_emails.extension
###Output
_____no_output_____
###Markdown
MetaDate transformer A **MetaDate transformer** creates new features from dates: **hour**, **minute** and **dayofweek**.
###Code
from melusine.prepare_email.metadata_engineering import MetaDate
meta_date = MetaDate()
df_emails = meta_date.fit_transform(df_emails)
df_emails.date[0]
df_emails.hour[0]
df_emails.loc[0,'min']
df_emails.dayofweek[0]
###Output
_____no_output_____
###Markdown
Dummifier transformer A **Dummifier transformer** dummifies categorical features. Its arguments are:- **columns_to_dummify**: a list of the metadata columns to dummify.
###Code
from melusine.prepare_email.metadata_engineering import Dummifier
dummifier = Dummifier(columns_to_dummify=['extension', 'dayofweek', 'hour', 'min'])
df_meta = dummifier.fit_transform(df_emails)
df_meta.columns
df_meta.head()
###Output
_____no_output_____
###Markdown
Metadata preprocessing tutorial Melusine **prepare_data.metadata_engineering subpackage** provides classes to preprocess the metadata:- **MetaExtension:** a transformer which creates an 'extension' feature extracted from regex in metadata. It extracts the extensions of mail addresses.- **MetaDate:** a transformer which creates new features from dates such as: hour, minute, dayofweek.- **MetaAttachmentType:** a transformer which creates an 'attachment type' feature extracted from regex in metadata. It extracts the extensions of attached files.- **Dummifier:** a transformer to dummify categorical features. All the classes have **fit_transform** methods. Input dataframe - To use a **MetaExtension** transformer: the dataframe requires a **from** column- To use a **MetaDate** transformer: the dataframe requires a **date** column- To use a **MetaAttachmentType** transformer: the dataframe requires an **attachment** column with the list of attached files
###Code
from melusine.data.data_loader import load_email_data
import ast
df_emails = load_email_data(type="preprocessed")
df_emails['from'].head(2)
df_emails['date'].head(2)
df_emails['attachment'].head(2)
###Output
_____no_output_____
###Markdown
MetaExtension transformer A **MetaExtension transformer** creates an *extension* feature extracted from regex in metadata. It extracts the extensions of mail addresses.
###Code
from melusine.prepare_email.metadata_engineering import MetaExtension
meta_extension = MetaExtension()
df_emails = meta_extension.fit_transform(df_emails)
df_emails["extension"].head(5)
###Output
_____no_output_____
###Markdown
MetaDate transformer A **MetaDate transformer** creates new features from dates: hour, minute and dayofweek
###Code
from melusine.prepare_email.metadata_engineering import MetaDate
meta_date = MetaDate()
df_emails = meta_date.fit_transform(df_emails)
df_emails.date[0]
df_emails.hour[0]
df_emails.dayofweek[0]
###Output
_____no_output_____
###Markdown
MetaAttachmentType transformer A **MetaAttachmentType transformer** creates an *attachment_type* feature extracted from a list of attachment names. It extracts the extensions of the attached files.
###Code
from melusine.prepare_email.metadata_engineering import MetaAttachmentType
meta_pj = MetaAttachmentType()
df_emails = meta_pj.fit_transform(df_emails)
df_emails.attachment_type.head(2)
###Output
_____no_output_____
###Markdown
Dummifier transformer A **Dummifier transformer** dummifies categorical features. Its arguments are:- **columns_to_dummify**: a list of the metadata columns to dummify.
###Code
from melusine.prepare_email.metadata_engineering import Dummifier
dummifier = Dummifier(columns_to_dummify=['extension','attachment_type', 'dayofweek', 'hour', 'min'])
df_meta = dummifier.fit_transform(df_emails)
df_meta.columns
df_meta.head()
###Output
_____no_output_____
###Markdown
Combine meta features with the emails DataFrame
###Code
import pandas as pd
df_full = pd.concat([df_emails,df_meta],axis=1)
###Output
_____no_output_____
natural_language_processing/01L/text_generator.ipynb | ###Markdown
Standard random choice 2_grams
###Code
%%time
K = 70
def load_data(file_name):
with open(file_name) as f:
res = [x.split() for x in f if int(x.split()[0]) > K]
print(len(res))
return res
grams2_data = load_data('poleval_2grams.txt')
data_m = [x[1:] for x in grams2_data]
struct_2gram = {k[0]: set() for k in data_m}
for x in data_m:
struct_2gram[x[0]].add(x[1])
def generate_text(struct):
word = '<BOS>'
for i in range(50):
if word in struct:
temp = struct[word]
# print(temp)
word = random.choice(list(temp))
else:
word = random.choice(list(struct.keys()))
print(word, end=' ')
generate_text(struct_2gram)
###Output
_____no_output_____
###Markdown
3_grams
###Code
%%time
grams3_data = load_data('poleval_3grams.txt')
struct_3gram = {(k[1], k[2]): set() for k in grams3_data if len(k) >= 4}
for x in grams3_data:
if len(x) >= 4:
# print(x)
struct_3gram[(x[1], x[2])].add(x[3])
def generate_3gram_text(struct):
last_words = random.choice(list(struct.keys()))
for i in range(50):
if last_words in struct:
new_word = random.choice(list(struct[last_words]))
last_words = (last_words[1], new_word)
else:
last_words = (last_words[1], random.choice(list(struct.keys()))[0])
print(last_words[-1], end=' ')
generate_3gram_text(struct_3gram)
###Output
_____no_output_____
###Markdown
Random choice with weights 2grams
###Code
struct_2gram_with_prop = {k[1]: {'prop': [], 'values': []} for k in grams2_data}
for x in grams2_data:
data = struct_2gram_with_prop[x[1]]
data['prop'].append(int(x[0]))
data['values'].append(x[2])
def random_choice_next(data):
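    # Weighted sampling: pick index i with probability proportional to its observed n-gram count.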
    probabilities = np.array(data['prop'])
    i = np.random.choice(len(data['values']), p=probabilities / np.sum(probabilities))
return data['values'][i]
def generate_2gram_withprop_text(struct):
word = '<BOS>'
for i in range(50):
if word in struct:
temp = struct[word]
# print(temp)
word = random_choice_next(temp)
else:
word = random.choice(list(struct.keys()))
print(word, end=' ')
generate_2gram_withprop_text(struct_2gram_with_prop)
###Output
_____no_output_____
###Markdown
3grams
###Code
struct_3gram_with_prop = {(k[1], k[2]): {'prop': [], 'values': []} for k in grams3_data if len(k) >= 4}
for x in grams3_data:
if len(x) >= 4:
data = struct_3gram_with_prop[(x[1], x[2])]
data['prop'].append(int(x[0]))
data['values'].append(x[3])
def generate_3gram_withprop_text(struct):
last_words = random.choice(list(struct.keys()))
for i in range(50):
if last_words in struct:
new_word = random_choice_next(struct[last_words])
last_words = (last_words[1], new_word)
else:
last_words = (last_words[1], random.choice(list(struct.keys()))[0])
print(last_words[-1], end=' ')
generate_3gram_withprop_text(struct_3gram_with_prop)
###Output
_____no_output_____ |
carsalescraper.ipynb | ###Markdown
Carsales.com.au Web Scraper for Toyota Dealer Used Vehicles in Australia
###Code
import datetime
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import requests
from bs4 import BeautifulSoup
import time
###Output
_____no_output_____
###Markdown
The function takes a vehicle line as a string and returns a DataFrame with car title, price, kilometers, body, transmission and engine details. The example below is for a LandCruiser.
###Code
def toyota_scraper(vline):
soupy=[]
offsets=[]
for item in range(12,972):
if item % 12 == 0:
offsets.append(int(item))
urls=['https://www.carsales.com.au/cars/dealer/used/toyota/'+vline+'/?sort=Price&offset=0']
for i in offsets:
offset = i
url = "https://www.carsales.com.au/cars/dealer/used/toyota/"+vline+"/?sort=Price&offset="+str(offset)
urls.append(url)
for item in urls:
headers = {
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'}
response = requests.get(item, headers=headers, timeout=(3,3))
if response.status_code == 200:
txt=response.text
soupy.append(txt)
results=[BeautifulSoup(item, "html5lib") for item in soupy]
everything=[]
cars_list=[]
price_list=[]
kms_list=[]
body_list=[]
transmission_list=[]
engine_list=[]
for item in results:
x=item.find_all("div",class_="card-body")
odometer=item.find_all("li" ,class_="key-details__value")[0::4]
for km in odometer:
kms_list.append(km.text)
body=item.find_all("li" ,class_="key-details__value")[1::4]
for bd in body:
body_list.append(bd.text)
trans=item.find_all("li" ,class_="key-details__value")[2::4]
for ts in trans:
transmission_list.append(ts.text)
engine=item.find_all("li" ,class_="key-details__value")[3::4]
for en in engine:
engine_list.append(en.text)
for car in x:
prices=car.find_all("div",class_="price")
price=prices[0].text
price_list.append(price)
car_title=car.find_all("h3")
car_1=car_title[0].text
cars_list.append(car_1)
df = pd.DataFrame({'car_title': cars_list, 'price': price_list,'kms':kms_list,'body':body_list,'transmission':transmission_list,'engine':engine_list})
df['car_title'] = df['car_title'].replace(r'\s+|\\n', ' ', regex=True)
df['price'] = df['price'].replace(r'\s+|\\n', ' ', regex=True)
df['price'] = df['price'].str.replace(',', '').str.replace('$', '').str.replace('*', '').astype(int)
df['kms'] = df['kms'].str.replace(',', '').str.replace('km', '')
return df
df_land_cruiser=toyota_scraper('landcruiser')
df_land_cruiser.head(20)
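# The same scraper can be reused for other vehicle lines, e.g. (assuming 'corolla' is a valid
# carsales.com.au vehicle-line slug for the URL):
# df_corolla = toyota_scraper('corolla')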
###Output
_____no_output_____ |
notebooks/make_plot_from_exp_data.ipynb | ###Markdown
Make Plot from Experimental DataBuild a plot from some data that we have supplied ahead of time.
###Code
import pickle
import numpy as np
import scipy.optimize
import matplotlib.pyplot as plt
import seaborn
seaborn.set_context(context='paper',font_scale=2.5)
%matplotlib inline
###Output
_____no_output_____
###Markdown
First, load the data.
###Code
with open('../results/static/sample_exp_data.pickle','rb') as f:
results = pickle.load(f)
###Output
_____no_output_____
###Markdown
Next, fit the data with `curve_fit` module in SciPy.
###Code
def linear_fit(x,a,b):
return a*x + b
pop,covar = scipy.optimize.curve_fit(linear_fit,results['x'],results['y'])
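# pop holds the best-fit parameters (a, b); covar is their covariance matrix, so e.g.
# np.sqrt(np.diag(covar)) would give the 1-sigma parameter uncertainties.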
###Output
_____no_output_____
###Markdown
Now, pull out the fitted curve
###Code
y_fit = linear_fit(results['x'],*pop)
###Output
_____no_output_____
###Markdown
Finally, build the plot and save it.
###Code
fig = plt.figure(figsize=(8,8))
ax = fig.gca()
ax.scatter(results['x'],results['y'],marker='.',s=200,label='data',color=seaborn.color_palette('deep')[0])
ax.plot(results['x'],y_fit,linewidth=3,color=seaborn.color_palette('deep')[2],label='fit')
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
ax.legend(loc='best')
plt.savefig(__dest__,format='pdf',dpi=1000)
plt.show()
###Output
_____no_output_____ |
notebooks/exploratory/Classification Pipeline.ipynb | ###Markdown
Building the Pipeline
###Code
from sklearn.preprocessing import OneHotEncoder, LabelEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
#onehot_encoder = OneHotEncoder(sparse=False)
#t_train = onehot_encoder.fit_transform(train_data['species'].values.reshape(len(train_data['species']), 1))
label_encoder = LabelEncoder()
t_train = label_encoder.fit_transform(train_data['species'])
x_train = train_data.drop("id", axis=1)
N, M = x_train.shape
print("Training Data:", N)
print("Dimension:", M)
X_train, X_test, y_train, y_test = train_test_split(x_train.iloc[:, 1:].values,
t_train,
test_size=0.25,
random_state=10)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
pipeline = Pipeline([
('Standardization', StandardScaler()), # Step 1 - Normalize data (z-score)
('clf', LogisticRegression(max_iter=1000)) # Step 2 - Classifier
])
print(pipeline.steps)
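# The assembled pipeline exposes the usual estimator API, so it could also (for example) be fit
# and scored directly on the split above:
# pipeline.fit(X_train, y_train)
# print(accuracy_score(y_test, pipeline.predict(X_test)))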
###Output
[('Standardization', StandardScaler()), ('clf', LogisticRegression(max_iter=1000))]
###Markdown
Trying Logistic Regression ClassifierUse Cross-Validation to test the accuracy of the pipeline
###Code
from sklearn.model_selection import cross_validate
scores = cross_validate(pipeline, X_train, y_train)
print(scores)
print("Average accuracy of pipeline with Logistic Regression:", "%.2f" % (scores['test_score'].mean()*100), "%")
###Output
Average accuracy of pipeline with Logistic Regression: 98.52 %
###Markdown
Trying out other classification algorithms
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
clfs = []
clfs.append(LogisticRegression(max_iter=1000))
clfs.append(SVC())
clfs.append(KNeighborsClassifier())
clfs.append(DecisionTreeClassifier())
clfs.append(RandomForestClassifier())
#clfs.append(GradientBoostingClassifier())
for classifier in clfs:
pipeline.set_params(clf=classifier)
scores = cross_validate(pipeline, X_train, y_train)
print('-----------------------------------------------')
print(str(classifier))
print('-----------------------------------------------')
for key, values in scores.items():
print(key, 'mean ', values.mean())
print(key, 'std ', values.std())
###Output
/home/mlussier/.venv_ift712/lib/python3.6/site-packages/sklearn/model_selection/_split.py:672: UserWarning: The least populated class in y has only 4 members, which is less than n_splits=5.
% (min_groups, self.n_splits)), UserWarning)
###Markdown
Cross-Validation and Hyper-parameters Tuning
###Code
from sklearn.model_selection import GridSearchCV
pipeline.set_params(clf=SVC()) #SVC() LogisticRegression(max_iter=1000)
print(pipeline.steps)
parameters = {
'clf__kernel': ['linear', 'rbf', 'poly'],
'clf__gamma': [1, 0.75, 0.5, 0.25, 0.1, 0.01, 0.001],
'clf__C': np.linspace(0.1, 1.2, 12)
}
cv_grid = GridSearchCV(pipeline, param_grid=parameters)
cv_grid.fit(X_train, y_train)
###Output
/home/mlussier/.venv_ift712/lib/python3.6/site-packages/sklearn/model_selection/_split.py:672: UserWarning: The least populated class in y has only 4 members, which is less than n_splits=5.
% (min_groups, self.n_splits)), UserWarning)
###Markdown
Best combinations of the parameters can be accessed from **best_params_**
###Code
print("Best Parameters from Grid Search")
print(cv_grid.best_params_)
cv_grid.best_estimator_
cv_grid.best_score_
###Output
_____no_output_____
###Markdown
Test set prediction
###Code
y_predict = cv_grid.predict(X_test)
accuracy = accuracy_score(y_test, y_predict)
print("Accuracy of the best classifier after CV is %.3f%%" % (accuracy*100))
y_predict
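# The integer predictions can be mapped back to species names with the LabelEncoder fitted above, e.g.:
# species_pred = label_encoder.inverse_transform(y_predict)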
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
# Load and split the data
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
# Construct some pipelines
pipe_lr = Pipeline([('scl', StandardScaler()),
('clf', LogisticRegression(random_state=42))])
pipe_lr_pca = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', LogisticRegression(random_state=42))])
pipe_rf = Pipeline([('scl', StandardScaler()),
('clf', RandomForestClassifier(random_state=42))])
pipe_rf_pca = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', RandomForestClassifier(random_state=42))])
pipe_svm = Pipeline([('scl', StandardScaler()),
('clf', svm.SVC(random_state=42))])
pipe_svm_pca = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', svm.SVC(random_state=42))])
# Set grid search params
param_range = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
param_range_fl = [1.0, 0.5, 0.1]
grid_params_lr = [{'clf__penalty': ['l1', 'l2'],
'clf__C': param_range_fl,
'clf__solver': ['liblinear']}]
grid_params_rf = [{'clf__criterion': ['gini', 'entropy'],
'clf__min_samples_leaf': param_range,
'clf__max_depth': param_range,
'clf__min_samples_split': param_range[1:]}]
grid_params_svm = [{'clf__kernel': ['linear', 'rbf'],
'clf__C': param_range}]
# Construct grid searches
jobs = -1
gs_lr = GridSearchCV(estimator=pipe_lr,
param_grid=grid_params_lr,
scoring='accuracy',
cv=10)
gs_lr_pca = GridSearchCV(estimator=pipe_lr_pca,
param_grid=grid_params_lr,
scoring='accuracy',
cv=10)
gs_rf = GridSearchCV(estimator=pipe_rf,
param_grid=grid_params_rf,
scoring='accuracy',
cv=10,
n_jobs=jobs)
gs_rf_pca = GridSearchCV(estimator=pipe_rf_pca,
param_grid=grid_params_rf,
scoring='accuracy',
cv=10,
n_jobs=jobs)
gs_svm = GridSearchCV(estimator=pipe_svm,
param_grid=grid_params_svm,
scoring='accuracy',
cv=10,
n_jobs=jobs)
gs_svm_pca = GridSearchCV(estimator=pipe_svm_pca,
param_grid=grid_params_svm,
scoring='accuracy',
cv=10,
n_jobs=jobs)
# List of pipelines for ease of iteration
grids = [gs_lr, gs_lr_pca, gs_rf, gs_rf_pca, gs_svm, gs_svm_pca]
# Dictionary of pipelines and classifier types for ease of reference
grid_dict = {0: 'Logistic Regression', 1: 'Logistic Regression w/PCA',
2: 'Random Forest', 3: 'Random Forest w/PCA',
4: 'Support Vector Machine', 5: 'Support Vector Machine w/PCA'}
# Fit the grid search objects
print('Performing model optimizations...')
best_acc = 0.0
best_clf = 0
best_gs = ''
for idx, gs in enumerate(grids):
print('\nEstimator: %s' % grid_dict[idx])
# Fit grid search
gs.fit(X_train, y_train)
# Best params
print('Best params: %s' % gs.best_params_)
# Best training data accuracy
print('Best training accuracy: %.3f' % gs.best_score_)
# Predict on test data with best params
y_pred = gs.predict(X_test)
# Test data accuracy of model with best params
print('Test set accuracy score for best params: %.3f ' % accuracy_score(y_test, y_pred))
# Track best (highest test accuracy) model
if accuracy_score(y_test, y_pred) > best_acc:
best_acc = accuracy_score(y_test, y_pred)
best_gs = gs
best_clf = idx
print('\nClassifier with best test set accuracy: %s' % grid_dict[best_clf])
###Output
_____no_output_____ |
examples/tutorials/colabs/habitat2_gym_tutorial.ipynb | ###Markdown
Habitat 2.0 Gym APIThis tutorial covers how to use Habitat 2.0 environments as standard gym environments.See [here for Habitat 2.0 installation instructions and more tutorials.](https://aihabitat.org/docs/habitat2/)
###Code
%%capture
# @title Install Dependencies (if on Colab) { display-mode: "form" }
# @markdown (double click to show code)
import os
if "COLAB_GPU" in os.environ:
print("Setting up Habitat")
!curl -L https://raw.githubusercontent.com/facebookresearch/habitat-sim/main/examples/colab_utils/colab_install.sh | NIGHTLY=true bash -s
import os
if "COLAB_GPU" in os.environ:
print("Setting Habitat base path")
%env HABLAB_BASE_CFG_PATH=/content/habitat-lab
import importlib
import PIL
importlib.reload(PIL.TiffTags)
# Video rendering utility.
from habitat_sim.utils import viz_utils as vut
# Quiet the Habitat simulator logging
os.environ["MAGNUM_LOG"] = "quiet"
os.environ["HABITAT_SIM_LOG"] = "quiet"
# If the import block below fails due to an error like "'PIL.TiffTags' has no attribute
# 'IFD'", then restart the Colab runtime instance and rerun this cell and the previous cell.
# The ONLY two lines you need to add to start importing Habitat 2.0 Gym environments.
import gym
# flake8: noqa
import habitat.utils.gym_definitions
###Output
_____no_output_____
###Markdown
Simple ExampleThis example sets up the Pick task in render mode which includes a high resolution camera in the scene for visualization.
###Code
env = gym.make("HabitatRenderPick-v0")
video_file_path = "data/example_interact.mp4"
video_writer = vut.get_fast_video_writer(video_file_path, fps=30)
done = False
env.reset()
while not done:
obs, reward, done, info = env.step(env.action_space.sample())
video_writer.append_data(env.render("rgb_array"))
video_writer.close()
if vut.is_notebook():
vut.display_video(video_file_path)
env.close()
###Output
_____no_output_____
###Markdown
Environment OptionsTo create the environment in performance mode remove `Render` from the environment ID string. The environment ID follows the format: `Habitat[Render?][Task Name]-v0`. All the supported environment IDs are listed below. The `Render` option can always be added to include the higher resolution 3rd POV camera for visualization.* Skills: * `HabitatPick-v0` * `HabitatPlace-v0` * `HabitatCloseCab-v0` * `HabitatCloseFridge-v0` * `HabitatOpenCab-v0` * `HabitatOpenFridge-v0` * `HabitatNavToObj-v0` * `HabitatReachState-v0`* Home Assistant Benchmark (HAB) tasks: * `HabitatTidyHouse-v0` * `HabitatPrepareGroceries-v0` * `HabitatSetTable-v0` * `HabitatNavPick-v0` * `HabitatNavPickNavPlace-v0`The Gym environments are automatically registered from the RL training configurations under ["habitat_baselines/config/rearrange"](https://github.com/facebookresearch/habitat-lab/tree/main/habitat_baselines/config/rearrange). The `GYM_AUTO_NAME` key in the YAML file determines the `[Task Name]`. The observation keys in `RL.GYM_OBS_KEYS` are what is returned in the observation space. If the observations are a set of 1D arrays, then the observation space is automatically flattened. For example, in `HabitatReachState-v0` the observation space is `RL.GYM_OBS_KEYS = ['joint', 'relative_resting_position']`. `joint` is a 7D array and `relative_resting_position` is a 3D array. These two arrays are concatenated automatically to give a `10D` observation space. On the other hand, in environments with image observations, the observation is returned as a dictionary.An example of these different observation spaces is demonstrated below:
###Code
# Dictionary observation space
env = gym.make("HabitatPick-v0")
print(
"Pick observation space",
{k: v.shape for k, v in env.observation_space.spaces.items()},
)
env.close()
# Array observation space
env = gym.make("HabitatReachState-v0")
print("Reach observation space", env.observation_space)
env.close()
###Output
_____no_output_____
###Markdown
Environment ConfigurationYou can also modify the config specified in the YAML file through `gym.make` by passing the `override_options` argument. Here is an example of changing the gripper type to use the suction grasp in the Pick Task.
###Code
env = gym.make(
"HabitatPick-v0",
override_options=[
"TASK.ACTIONS.ARM_ACTION.GRIP_CONTROLLER",
"SuctionGraspAction",
],
)
print("Action space with suction grip", env.action_space)
env.close()
###Output
_____no_output_____
###Markdown
Habitat 2.0 Gym APIThis tutorial covers how to use Habitat 2.0 environments as standard gym environments.See [here for Habitat 2.0 installation instructions](https://colab.research.google.com/github/facebookresearch/habitat-lab/blob/hab_suite/examples/tutorials/colabs/Habitat2_Quickstart.ipynbscrollTo=50rOVwceXvzL)
###Code
%%capture
# @title Install Dependencies (if on Colab) { display-mode: "form" }
# @markdown (double click to show code)
import os
if "COLAB_GPU" in os.environ:
print("Setting up Habitat")
!curl -L https://raw.githubusercontent.com/facebookresearch/habitat-sim/main/examples/colab_utils/colab_install.sh | NIGHTLY=true bash -s
# Setup to use the hab_suite branch of Habitat Lab.
! cd /content/habitat-lab && git remote set-branches origin 'hab_suite' && git fetch -v && git checkout hab_suite && cd /content/habitat-lab && python setup.py develop --all && pip install . && cd -
import os
if "COLAB_GPU" in os.environ:
print("Setting Habitat base path")
%env HABLAB_BASE_CFG_PATH=/content/habitat-lab
import importlib
import PIL
importlib.reload(PIL.TiffTags)
# Video rendering utility.
from habitat_sim.utils import viz_utils as vut
# Quiet the Habitat simulator logging
os.environ["MAGNUM_LOG"] = "quiet"
os.environ["HABITAT_SIM_LOG"] = "quiet"
# If the import block below fails due to an error like "'PIL.TiffTags' has no attribute
# 'IFD'", then restart the Colab runtime instance and rerun this cell and the previous cell.
# The ONLY two lines you need to add to start importing Habitat 2.0 Gym environments.
import gym
# flake8: noqa
import habitat.utils.gym_definitions
###Output
_____no_output_____
###Markdown
Simple ExampleThis example sets up the Pick task in render mode which includes a high resolution camera in the scene for visualization.
###Code
env = gym.make("HabitatRenderPick-v0")
video_file_path = "data/example_interact.mp4"
video_writer = vut.get_fast_video_writer(video_file_path, fps=30)
done = False
env.reset()
while not done:
obs, reward, done, info = env.step(env.action_space.sample())
video_writer.append_data(env.render("rgb_array"))
video_writer.close()
if vut.is_notebook():
vut.display_video(video_file_path)
env.close()
###Output
_____no_output_____
###Markdown
Environment OptionsTo create the environment in performance mode remove `Render` from the environment ID string. The environment ID follows the format: `Habitat[Render?][Task Name]-v0`. All the supported environment IDs are listed below. The `Render` option can always be added to include the higher resolution 3rd POV camera for visualization.* Skills: * `HabitatPick-v0` * `HabitatPlace-v0` * `HabitatCloseCab-v0` * `HabitatCloseFridge-v0` * `HabitatOpenCab-v0` * `HabitatOpenFridge-v0` * `HabitatNavToObj-v0` * `HabitatReachState-v0`* Home Assistant Benchmark (HAB) tasks: * `HabitatTidyHouse-v0` * `HabitatPrepareGroceries-v0` * `HabitatSetTable-v0` * `HabitatNavPick-v0` * `HabitatNavPickNavPlace-v0`The Gym environments are automatically registered from the RL training configurations under ["habitat_baselines/config/rearrange"](https://github.com/facebookresearch/habitat-lab/tree/main/habitat_baselines/config/rearrange). The `GYM_AUTO_NAME` key in the YAML file determines the `[Task Name]`. The observation keys in `RL.GYM_OBS_KEYS` are what is returned in the observation space. If the observations are a set of 1D arrays, then the observation space is automatically flattened. For example, in `HabitatReachState-v0` the observation space is `RL.GYM_OBS_KEYS = ['joint', 'relative_resting_position']`. `joint` is a 7D array and `relative_resting_position` is a 3D array. These two arrays are concatenated automatically to give a `10D` observation space. On the other hand, in environments with image observations, the observation is returned as a dictionary.An example of these different observation spaces is demonstrated below:
###Code
# Dictionary observation space
env = gym.make("HabitatPick-v0")
print(
"Pick observation space",
{k: v.shape for k, v in env.observation_space.spaces.items()},
)
env.close()
# Array observation space
env = gym.make("HabitatReachState-v0")
print("Reach observation space", env.observation_space)
env.close()
###Output
_____no_output_____
###Markdown
Environment ConfigurationYou can also modify the config specified in the YAML file through `gym.make` by passing the `override_options` argument. Here is an example of changing the gripper type to use the suction grasp in the Pick Task.
###Code
env = gym.make(
"HabitatPick-v0",
override_options=[
"TASK.ACTIONS.ARM_ACTION.GRIP_CONTROLLER",
"SuctionGraspAction",
],
)
print("Action space with suction grip", env.action_space)
env.close()
###Output
_____no_output_____ |
Image/hsv_pan_sharpen.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# There are many fine places to look here is one. Comment
# this out if you want to twiddle knobs while panning around.
Map.setCenter(-61.61625, -11.64273, 14)
# Grab a sample L7 image and pull out the RGB and pan bands
# in the range (0, 1). (The range of the pan band values was
# chosen to roughly match the other bands.)
image1 = ee.Image('LANDSAT/LE7/LE72300681999227EDC00')
rgb = image1.select('B3', 'B2', 'B1').unitScale(0, 255)
gray = image1.select('B8').unitScale(0, 155)
# Convert to HSV, swap in the pan band, and convert back to RGB.
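# Pan-sharpening idea: keep the original hue and saturation (the colour information) and replace the
# value/intensity channel with the higher-resolution panchromatic band before converting back to RGB.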
huesat = rgb.rgbToHsv().select('hue', 'saturation')
upres = ee.Image.cat(huesat, gray).hsvToRgb()
# Display before and after layers using the same vis parameters.
visparams = {'min': [.15, .15, .25], 'max': [1, .9, .9], 'gamma': 1.6}
Map.addLayer(rgb, visparams, 'Original')
Map.addLayer(upres, visparams, 'Pansharpened')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# There are many fine places to look; here is one. Comment
# this out if you want to twiddle knobs while panning around.
Map.setCenter(-61.61625, -11.64273, 14)
# Grab a sample L7 image and pull out the RGB and pan bands
# in the range (0, 1). (The range of the pan band values was
# chosen to roughly match the other bands.)
image1 = ee.Image('LANDSAT/LE7/LE72300681999227EDC00')
rgb = image1.select('B3', 'B2', 'B1').unitScale(0, 255)
gray = image1.select('B8').unitScale(0, 155)
# Convert to HSV, swap in the pan band, and convert back to RGB.
huesat = rgb.rgbToHsv().select('hue', 'saturation')
upres = ee.Image.cat(huesat, gray).hsvToRgb()
# Display before and after layers using the same vis parameters.
visparams = {'min': [.15, .15, .25], 'max': [1, .9, .9], 'gamma': 1.6}
Map.addLayer(rgb, visparams, 'Original')
Map.addLayer(upres, visparams, 'Pansharpened')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine Introduction. This is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:
```
conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -y
source activate pydeck-ee
jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck
jupyter nbextension enable --sys-prefix --py pydeck
```
then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we create an `ee.Image` object
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
# There are many fine places to look; here is one. Comment
# this out if you want to twiddle knobs while panning around.
view_state = pdk.ViewState(longitude=-61.61625, latitude=-11.64273, zoom=14)
# Grab a sample L7 image and pull out the RGB and pan bands
# in the range (0, 1). (The range of the pan band values was
# chosen to roughly match the other bands.)
image1 = ee.Image('LANDSAT/LE7/LE72300681999227EDC00')
rgb = image1.select('B3', 'B2', 'B1').unitScale(0, 255)
gray = image1.select('B8').unitScale(0, 155)
# Convert to HSV, swap in the pan band, and convert back to RGB.
huesat = rgb.rgbToHsv().select('hue', 'saturation')
upres = ee.Image.cat(huesat, gray).hsvToRgb()
# Display before and after layers using the same vis parameters.
visparams = {'min': [.15, .15, .25], 'max': [1, .9, .9], 'gamma': 1.6}
ee_layers.append(EarthEngineLayer(ee_object=rgb, vis_params=visparams))
ee_layers.append(EarthEngineLayer(ee_object=upres, vis_params=visparams))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# There are many fine places to look; here is one. Comment
# this out if you want to twiddle knobs while panning around.
Map.setCenter(-61.61625, -11.64273, 14)
# Grab a sample L7 image and pull out the RGB and pan bands
# in the range (0, 1). (The range of the pan band values was
# chosen to roughly match the other bands.)
image1 = ee.Image('LANDSAT/LE7/LE72300681999227EDC00')
rgb = image1.select('B3', 'B2', 'B1').unitScale(0, 255)
gray = image1.select('B8').unitScale(0, 155)
# Convert to HSV, swap in the pan band, and convert back to RGB.
huesat = rgb.rgbToHsv().select('hue', 'saturation')
upres = ee.Image.cat(huesat, gray).hsvToRgb()
# Display before and after layers using the same vis parameters.
visparams = {'min': [.15, .15, .25], 'max': [1, .9, .9], 'gamma': 1.6}
Map.addLayer(rgb, visparams, 'Original')
Map.addLayer(upres, visparams, 'Pansharpened')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
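As a small sketch (not executed in this notebook), an additional basemap could be stacked on the map right after it is created, exactly as the other geemap examples in this document do:
```python
Map.add_basemap('ROADMAP')  # adds the Google Maps road basemap on top of the default one
```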
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# There are many fine places to look; here is one. Comment
# this out if you want to twiddle knobs while panning around.
Map.setCenter(-61.61625, -11.64273, 14)
# Grab a sample L7 image and pull out the RGB and pan bands
# in the range (0, 1). (The range of the pan band values was
# chosen to roughly match the other bands.)
image1 = ee.Image('LANDSAT/LE7/LE72300681999227EDC00')
rgb = image1.select('B3', 'B2', 'B1').unitScale(0, 255)
gray = image1.select('B8').unitScale(0, 155)
# Convert to HSV, swap in the pan band, and convert back to RGB.
huesat = rgb.rgbToHsv().select('hue', 'saturation')
upres = ee.Image.cat(huesat, gray).hsvToRgb()
# Display before and after layers using the same vis parameters.
visparams = {'min': [.15, .15, .25], 'max': [1, .9, .9], 'gamma': 1.6}
Map.addLayer(rgb, visparams, 'Original')
Map.addLayer(upres, visparams, 'Pansharpened')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API. Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# There are many fine places to look; here is one. Comment
# this out if you want to twiddle knobs while panning around.
Map.setCenter(-61.61625, -11.64273, 14)
# Grab a sample L7 image and pull out the RGB and pan bands
# in the range (0, 1). (The range of the pan band values was
# chosen to roughly match the other bands.)
image1 = ee.Image('LANDSAT/LE7/LE72300681999227EDC00')
rgb = image1.select('B3', 'B2', 'B1').unitScale(0, 255)
gray = image1.select('B8').unitScale(0, 155)
# Convert to HSV, swap in the pan band, and convert back to RGB.
huesat = rgb.rgbToHsv().select('hue', 'saturation')
upres = ee.Image.cat(huesat, gray).hsvToRgb()
# Display before and after layers using the same vis parameters.
visparams = {'min': [.15, .15, .25], 'max': [1, .9, .9], 'gamma': 1.6}
Map.addLayer(rgb, visparams, 'Original')
Map.addLayer(upres, visparams, 'Pansharpened')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# There are many fine places to look; here is one. Comment
# this out if you want to twiddle knobs while panning around.
Map.setCenter(-61.61625, -11.64273, 14)
# Grab a sample L7 image and pull out the RGB and pan bands
# in the range (0, 1). (The range of the pan band values was
# chosen to roughly match the other bands.)
image1 = ee.Image('LANDSAT/LE7/LE72300681999227EDC00')
rgb = image1.select('B3', 'B2', 'B1').unitScale(0, 255)
gray = image1.select('B8').unitScale(0, 155)
# Convert to HSV, swap in the pan band, and convert back to RGB.
huesat = rgb.rgbToHsv().select('hue', 'saturation')
upres = ee.Image.cat(huesat, gray).hsvToRgb()
# Display before and after layers using the same vis parameters.
visparams = {'min': [.15, .15, .25], 'max': [1, .9, .9], 'gamma': 1.6}
Map.addLayer(rgb, visparams, 'Original')
Map.addLayer(upres, visparams, 'Pansharpened')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# There are many fine places to look; here is one. Comment
# this out if you want to twiddle knobs while panning around.
Map.setCenter(-61.61625, -11.64273, 14)
# Grab a sample L7 image and pull out the RGB and pan bands
# in the range (0, 1). (The range of the pan band values was
# chosen to roughly match the other bands.)
image1 = ee.Image('LANDSAT/LE7/LE72300681999227EDC00')
rgb = image1.select('B3', 'B2', 'B1').unitScale(0, 255)
gray = image1.select('B8').unitScale(0, 155)
# Convert to HSV, swap in the pan band, and convert back to RGB.
huesat = rgb.rgbToHsv().select('hue', 'saturation')
upres = ee.Image.cat(huesat, gray).hsvToRgb()
# Display before and after layers using the same vis parameters.
visparams = {'min': [.15, .15, .25], 'max': [1, .9, .9], 'gamma': 1.6}
Map.addLayer(rgb, visparams, 'Original')
Map.addLayer(upres, visparams, 'Pansharpened')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____ |
agreg/public2018_D1.ipynb | ###Markdown
Modeling oral text - Agrégation, Computer Science Option Preparation for the agrégation - ENS de Rennes, 2017-18- *Date*: 6 December 2017- *Author*: [Lilian Besson](https://GitHub.com/Naereen/notebooks/)- *Text*: 2018 exam text, ["Expressions arithmétiques" (public2018-D1)](http://agreg.org/Textes/public2018-D1.pdf) About this document- This is a *proposed* solution, partial and probably non-optimal, for the implementation part of an [exam text of the mathematics agrégation, computer science option](http://Agreg.org/Textes/).- This document is a [Jupyter notebook](https://www.Jupyter.org/), and [is open-source under the MIT License on GitHub](https://github.com/Naereen/notebooks/tree/master/agreg/), like the other modeling-text solutions [I](https://GitHub.com/Naereen) wrote this year.- The implementation is done in OCaml, version 4+:
###Code
Sys.command "ocaml -version";;
print_endline Sys.ocaml_version;;
###Output
The OCaml toplevel, version 4.04.2
###Markdown
---- Programming question. The programming question for this text was given at the end of page 6: Modeling. We are free to choose whichever implementation suits us for arithmetic expressions in tree form. I choose to consider only variables and not values (we will have OCaml expressions such as `F("x")` for the variable $x$), and only binary trees. The nodes `N(t1, op, t2)` are labeled by a binary operator `op`, taken from a fixed and finite list given in advance, and the leaves `F(s)` are labeled by a variable `s`. Exercise> "Write a function that takes as argument an algebraic expression given in tree form and decorates this expression by computing, for each internal node, the value of the parameter ρ and which branch must be evaluated first according to Ershov's algorithm." ---- Solution. We will try to be quick and keep things simple, so we choose a particular term algebra, but Ershov's algorithm will be implemented generically (polymorphically, whatever the value of `op`). Types and representations
###Code
type operateur = Plus | Moins | MoinsDroite | Mul | Div | DivDroite | Modulo | Expo ;;
(* MoinsDroite and DivDroite will be used for compilation with Ershov's method *)
type ('a, 'b) arbre_binaire = N of (('a,'b) arbre_binaire) * 'b * (('a,'b) arbre_binaire) | F of 'a
###Output
_____no_output_____
###Markdown
For example, for the expression $\frac{x - yz}{u - vw}$, that is `(x - y*z)/(u - v*w)`:
###Code
(* exp1 = (x - y*z) *)
let exp1 =
N(
F("x"),
Moins,
N(
F("y"),
Mul,
F("z")
)
)
;;
(* exp2 = (u - v*w) *)
let exp2 =
N(
F("u"),
Moins,
N(
F("v"),
Mul,
F("w")
)
)
;;
(* exp3 = (x - y*z)/(u - v*w) *)
let exp3 =
N(
exp1,
Div,
exp2
)
###Output
_____no_output_____
###Markdown
Recursive computation of the number $\rho$. It is fairly immediate, following the recursive definition: $\rho(F) = 0$, and $\rho(N(t_1, t_2)) = \rho(t_1) + 1$ if $\rho(t_1) = \rho(t_2)$, and $\max(\rho(t_1), \rho(t_2))$ if $\rho(t_1) \neq \rho(t_2)$.
###Code
let rec nombre_rho (expr : ('a, 'b) arbre_binaire) : int =
match expr with
| F _ -> 0
| N(t1, _, t2) ->
let d1, d2 = nombre_rho t1, nombre_rho t2 in
if d1 = d2 then
d1 + 1
else
max d1 d2
;;
###Output
_____no_output_____
###Markdown
For comparison with the simpler computation of the height of the tree:
###Code
let rec hauteur (expr : ('a, 'b) arbre_binaire) : int =
match expr with
| F _ -> 0
| N(t1, _, t2) ->
let d1, d2 = hauteur t1, hauteur t2 in
1 + (max d1 d2)
;;
###Output
_____no_output_____
###Markdown
Examples that agree with the text:
###Code
let _ = hauteur exp1;;
let _ = nombre_rho exp1;;
let _ = hauteur exp2;;
let _ = nombre_rho exp2;;
let _ = hauteur exp3;;
let _ = nombre_rho exp3;;
###Output
_____no_output_____
###Markdown
Requested algorithm to decorate the tree. We choose to add a *decoration* of type `'c`:
###Code
type ('a, 'b, 'c) arbre_binaire_decore = N2 of ('c * (('a, 'b, 'c) arbre_binaire_decore) * 'b * (('a, 'b, 'c) arbre_binaire_decore)) | F2 of 'a
###Output
_____no_output_____
###Markdown
We need to attach to each node its parameter $\rho$ and a boolean flag telling whether Ershov's algorithm says to evaluate first the left subtree (`premier_gauche = true`) or the right one (`= false`).
###Code
type decoration = {
rho : int;
premier_gauche : bool;
};;
let rec decore (expr : (('a, 'b) arbre_binaire)) : (('a, 'b, decoration) arbre_binaire_decore) =
match expr with
| F v -> F2 v
| N (t1, o, t2) ->
let d1, d2 = nombre_rho t1, nombre_rho t2 in
let d = if d1 = d2 then d1 + 1 else max d1 d2 in
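        (* Ershov's rule: evaluate first the subtree with the larger rho; on a tie, the left subtree keeps priority (premier_gauche = true) *)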
N2({rho = d; premier_gauche = (d2<= d1)}, (decore t1), o, (decore t2))
;;
###Output
_____no_output_____
###Markdown
In our examples, we see that the evaluation favors first (with `premier_gauche = false`) the deepest expressions (on the right) in the sense of the parameter $\rho$:
###Code
decore exp1;;
decore exp2;;
decore exp3;;
###Output
_____no_output_____
###Markdown
Complexities. In space: the two functions shown above use no space other than the decorated tree and the recursive call stack.- The recursive computation of $\rho(t)$ therefore takes space proportional to the number of nested recursive calls, which is bounded by the size of the term $t$ (defined as the number of nodes and leaves), so it is **linear**,- The computation for Ershov's method is also **linear**, since the decorated tree has the same size as the initial tree. In time: the two functions shown above are **linear** in the size of the tree. Additional implementations. We can try to implement a function that would display this for the natural evaluation method: And this for Ershov's method: It is not too hard, but it takes a bit of time:- we will first show how to evaluate expressions, by reading the tree and making recursive calls,- then we will do a postfix traversal of the tree in order to evaluate with a stack, using the naive strategy,- and finally Ershov's method will reduce the maximum height of the stack, going from $h(t)$ (height of the tree) to $\rho(t)$,- as a bonus, we will display the instructions in a "register compiler" style, to visualize the gain brought by Ershov's method.> Of course, all this is far too much to attempt on the day of the oral! But one of the points could have been implemented quickly. Evaluating expressions. A first, simpler objective is to evaluate expressions, by providing a *context* (a table associating a value with each variable).
###Code
type ('a, 'b) contexte = ('a * 'b) list;;
let valeur (ctx : ('a, 'b) contexte) (var : 'a) = List.assoc var ctx;;
(* a Hashtbl could be used if better performance is needed *)
let contexte1 : (string, int) contexte = [
("x", 1); ("y", 2); ("z", 3);
("u", 4); ("v", 5); ("w", 6)
];;
let intop_of_op (op : operateur) : (int -> int -> int) =
match op with
| Plus -> ( + )
| Moins -> ( - )
| MoinsDroite -> (fun v1 -> fun v2 -> v2 - v1)
| Mul -> ( * )
| Div -> ( / )
| DivDroite -> (fun v1 -> fun v2 -> v2 / v1)
| Modulo -> ( mod )
| Expo ->
(fun v1 -> fun v2 -> int_of_float ((float_of_int v1) ** (float_of_int v2)))
;;
let rec eval_int (ctx : (string, int) contexte) (expr : (string, operateur) arbre_binaire) : int =
match expr with
| F(s) -> valeur ctx s
| N(t1, op, t2) ->
let v1, v2 = eval_int ctx t1, eval_int ctx t2 in
(intop_of_op op) v1 v2
;;
###Output
_____no_output_____
###Markdown
For example, $x$ is $1$ in the example context, and $x + y$ is $1 + 2 = 3$:
###Code
let _ = eval_int contexte1 (F("x"));;
let _ = eval_int contexte1 (N(F("x"), Plus, F("y")));;
let _ = eval_int contexte1 exp1;;
let _ = eval_int contexte1 exp2;;
let _ = eval_int contexte1 exp3;;
###Output
_____no_output_____
###Markdown
We see the weakness of interpreting with integers: the division `/` is an integer division! We can also interpret with floats:
###Code
let contexte2 : (string, float) contexte = [
("x", 1.); ("y", 2.); ("z", 3.);
("u", 4.); ("v", 5.); ("w", 6.)
];;
let floatop_of_op (op : operateur) : (float -> float -> float) =
match op with
| Plus -> ( +. )
| Moins -> ( -. )
| MoinsDroite -> (fun v1 -> fun v2 -> v2 -. v1)
| Mul -> ( *. )
| Div -> ( /. )
| DivDroite -> (fun v1 -> fun v2 -> v2 /. v1)
| Modulo ->
(fun v1 -> fun v2 -> float_of_int ((int_of_float v1) mod (int_of_float v2)))
| Expo -> ( ** )
;;
let rec eval_float (ctx : (string, float) contexte) (expr : (string, operateur) arbre_binaire) : float =
match expr with
| F(s) -> valeur ctx s
| N(t1, op, t2) ->
let v1, v2 = eval_float ctx t1, eval_float ctx t2 in
(floatop_of_op op) v1 v2
;;
###Output
_____no_output_____
###Markdown
For example, $x$ is $1$ in the example context, and $x + y$ is $1 + 2 = 3$:
###Code
let _ = eval_float contexte2 (F("x"));;
let _ = eval_float contexte2 (N(F("x"), Plus, F("y")));;
let _ = eval_float contexte2 exp1;;
let _ = eval_float contexte2 exp2;;
let _ = eval_float contexte2 exp3;;
###Output
_____no_output_____
###Markdown
Evaluation by postfix reading and a stack. We start by reading the tree in postfix order (cf. [TP2 @ ENS Rennes 2017/18](https://nbviewer.jupyter.org/github/Naereen/notebooks/tree/master/agreg/TP_Programmation_2017-18/TP2__OCaml.ipynb)) and then evaluate it with a stack.
###Code
type ('a, 'b) lexem = O of 'b | V of 'a;;
type ('a, 'b) parcours = (('a, 'b) lexem) list;;
let parcours_postfix (expr : ('a, 'b) arbre_binaire) : (('a, 'b) parcours) =
let rec parcours vus expr =
match expr with
| F(s) -> V(s) :: vus
| N(t1, op, t2) -> O(op) :: (parcours (parcours vus t1) t2)
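    (* the accumulator vus is built most-recent-first; List.rev below restores the left-to-right postfix order *)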
in
List.rev (parcours [] expr)
;;
###Output
_____no_output_____
###Markdown
Let's test it:
###Code
parcours_postfix exp1;;
parcours_postfix exp3;;
let eval_int_2 (ctx : (string, int) contexte) (expr : (string, operateur) arbre_binaire) : int =
let vus = parcours_postfix expr in
let pile = Stack.create () in
let aux lex =
match lex with
| V(s) -> Stack.push (valeur ctx s) pile;
| O(op) ->
let v1, v2 = Stack.pop pile, Stack.pop pile in
Stack.push ((intop_of_op op) v1 v2) pile;
in
List.iter aux vus;
Stack.pop pile
;;
###Output
_____no_output_____
###Markdown
For example, $x - y*z$ with $x = 1, y = 2, z = 3$ is $1 - 2 * 3 = -5$:
###Code
let _ = exp1 ;;
let _ = eval_int_2 contexte1 exp1;;
let _ = exp2;;
let _ = eval_int_2 contexte1 exp2;;
###Output
_____no_output_____
###Markdown
We can now write the same function, but which additionally displays the successive states of the stack (with values only).
###Code
let print f =
let r = Printf.printf f in
flush_all();
r
;;
let print_pile pile =
print "\nPile : ";
Stack.iter (print "%i; ") pile;
print "."
;;
let eval_int_3 (ctx : (string, int) contexte) (expr : (string, operateur) arbre_binaire) : int =
let vus = parcours_postfix expr in
let pile = Stack.create () in
let aux lex =
print_pile pile;
match lex with
| V(s) -> Stack.push (valeur ctx s) pile;
| O(op) ->
let v1, v2 = Stack.pop pile, Stack.pop pile in
Stack.push ((intop_of_op op) v1 v2) pile;
in
List.iter aux vus;
Stack.pop pile
;;
let _ = exp1 ;;
let _ = eval_int_3 contexte1 exp1;;
let _ = exp3;;
let _ = eval_int_3 contexte1 exp3;;
###Output
_____no_output_____
###Markdown
> There is an issue with the display order of the lines, due to Jupyter and not to our function. We check that at most 4 values are used on the stack, as shown in the figure from the text: Display in this register-manipulation language. We will not formalize this too much, just print the instructions...
###Code
let print_aff (line : int) (i : int) (s : string) : unit =
print "\n%02i: R[%d] := %s ;" line i s;
;;
let string_of_op (op : operateur) : string =
match op with
| Plus -> "+"
| Moins | MoinsDroite -> "-"
| Mul -> "*"
| Div | DivDroite -> "/"
| Modulo -> "%"
| Expo -> "^"
;;
let print_op (line : int) (i : int) (j : int) (k : int) (op : operateur) : unit =
match op with
    | MoinsDroite | DivDroite -> (* the "reversed" operators are handled here *)
print "\n%02i: R[%d] := R[%d] %s R[%d] ;" line i k (string_of_op op) j;
| _ ->
print "\n%02i: R[%d] := R[%d] %s R[%d] ;" line i j (string_of_op op) k;
;;
let eval_int_4 (ctx : (string, int) contexte) (expr : (string, operateur) arbre_binaire) : int =
let vus = parcours_postfix expr in
let pile = Stack.create () in
let ligne = ref 0 in
let aux lex =
incr ligne;
match lex with
| V(s) ->
Stack.push (valeur ctx s) pile;
print_aff !ligne ((Stack.length pile) - 1) s;
| O(op) ->
let v1, v2 = Stack.pop pile, Stack.pop pile in
Stack.push ((intop_of_op op) v1 v2) pile;
print_op !ligne ((Stack.length pile) - 1) ((Stack.length pile) - 1) (Stack.length pile) op;
in
List.iter aux vus;
Stack.pop pile
;;
###Output
_____no_output_____
###Markdown
Let's try it:
###Code
let _ = exp1 ;;
let _ = eval_int_4 contexte1 exp1;;
let _ = exp3;;
let _ = eval_int_4 contexte1 exp3;;
###Output
_____no_output_____
###Markdown
Ershov's method. Finally, in addition to the evaluation, we generate a display like the one we wanted, which shows the assignments to the registers and lets us see that Ershov's method uses one register less on the example term computing $(x - y*z)/(u - v*w)$. Recall that we obtained a decorated binary tree, represented as follows:
###Code
decore exp1;;
###Output
_____no_output_____
###Markdown
We modify our postfix traversal to take the decoration into account and know whether to compute the left or the right subtree first.
###Code
let parcours_postfix_decore (expr : ('a, 'b, decoration) arbre_binaire_decore) : (('a, 'b) parcours) =
let rec parcours vus expr =
match expr with
| F2(s) -> V(s) :: vus
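    (* when the right subtree is evaluated first (premier_gauche = false), the operands reach the stack in reversed order, so the non-commutative operators are swapped for their *Droite mirror variants *)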
| N2(dec, t1, Moins, t2) when dec.premier_gauche = false ->
O(MoinsDroite) :: (parcours (parcours vus t2) t1)
| N2(dec, t1, MoinsDroite, t2) when dec.premier_gauche = false ->
O(Moins) :: (parcours (parcours vus t2) t1)
| N2(dec, t1, Div, t2) when dec.premier_gauche = false ->
O(DivDroite) :: (parcours (parcours vus t2) t1)
| N2(dec, t1, DivDroite, t2) when dec.premier_gauche = false ->
O(Div) :: (parcours (parcours vus t2) t1)
| N2(dec, t1, op, t2) when dec.premier_gauche = false ->
O(op) :: (parcours (parcours vus t2) t1)
| N2(_, t1, op, t2) ->
O(op) :: (parcours (parcours vus t1) t2)
in
List.rev (parcours [] expr)
;;
let eval_int_ershov (ctx : (string, int) contexte) (expr : (string, operateur) arbre_binaire) : int =
let vus = parcours_postfix_decore (decore expr) in
let pile = Stack.create () in
let ligne = ref 0 in
let aux lex =
incr ligne;
match lex with
| V(s) ->
Stack.push (valeur ctx s) pile;
print_aff !ligne ((Stack.length pile) - 1) s;
| O(op) ->
let v1, v2 = Stack.pop pile, Stack.pop pile in
Stack.push ((intop_of_op op) v1 v2) pile;
print_op !ligne ((Stack.length pile) - 1) ((Stack.length pile) - 1) (Stack.length pile) op;
in
List.iter aux vus;
Stack.pop pile
;;
###Output
_____no_output_____
###Markdown
Let's try it:
###Code
let _ = exp1 ;;
let _ = eval_int_ershov contexte1 exp1;;
let _ = exp3;;
let _ = eval_int_ershov contexte1 exp3;;
###Output
_____no_output_____ |
dataset_0/notebook/tensorflow-tutorial-and-housing-price-prediction.ipynb | ###Markdown
TensorFlow Tutorial and Boston Housing Price Prediction from MIT Deep Learning (TBU)[TensorFlow](https://tensorflow.org) This kernel has Boston Housing Price Prediction from MIT Deep Learning by Lex Fridman and TensorFlow tutorial for Beginners with Latest APIs that was designed by Aymeric Damien for easily diving into TensorFlow, through examples. For readability, it includes both notebooks and source codes with explanation.> **Credits**: Thanks to **Lex Fridman's MIT Deep Learning**, **Aymeric Damien** and other contributers for such wonderful work! Here are some of *my kernel notebooks* for **Machine Learning and Data Science** as follows, ***Upvote*** them if you *like* them> * [Awesome Deep Learning Basics and Resources](https://www.kaggle.com/arunkumarramanan/awesome-deep-learning-resources)> * [Data Science with R - Awesome Tutorials](https://www.kaggle.com/arunkumarramanan/data-science-with-r-awesome-tutorials)> * [Data Science and Machine Learning Cheetcheets](https://www.kaggle.com/arunkumarramanan/data-science-and-machine-learning-cheatsheets)> * [Awesome ML Frameworks and MNIST Classification](https://www.kaggle.com/arunkumarramanan/awesome-machine-learning-ml-frameworks)> * [Awesome Data Science for Beginners with Titanic Exploration](https://kaggle.com/arunkumarramanan/awesome-data-science-for-beginners)> * [Data Scientist's Toolkits - Awesome Data Science Resources](https://www.kaggle.com/arunkumarramanan/data-scientist-s-toolkits-awesome-ds-resources)> * [Awesome Computer Vision Resources (TBU)](https://www.kaggle.com/arunkumarramanan/awesome-computer-vision-resources-to-be-updated)> * [Machine Learning and Deep Learning - Awesome Tutorials](https://www.kaggle.com/arunkumarramanan/awesome-deep-learning-ml-tutorials)> * [Data Science with Python - Awesome Tutorials](https://www.kaggle.com/arunkumarramanan/data-science-with-python-awesome-tutorials)> * [Awesome TensorFlow and PyTorch Resources](https://www.kaggle.com/arunkumarramanan/awesome-tensorflow-and-pytorch-resources)> * [Awesome Data Science IPython Notebooks](https://www.kaggle.com/arunkumarramanan/awesome-data-science-ipython-notebooks)> * [Machine Learning Engineer's Toolkit with Roadmap](https://www.kaggle.com/arunkumarramanan/machine-learning-engineer-s-toolkit-with-roadmap) > * [Hands-on ML with scikit-learn and TensorFlow](https://www.kaggle.com/arunkumarramanan/hands-on-ml-with-scikit-learn-and-tensorflow)> * [Practical Machine Learning with PyTorch](https://www.kaggle.com/arunkumarramanan/practical-machine-learning-with-pytorch)> * [Tensorflow Tutorial and House Price Prediction](https://www.kaggle.com/arunkumarramanan/tensorflow-tutorial-and-examples) Machine Learning ML Crash Course with TensorFlow APIs is highly recommended by Google as it's developed by googlers. * [Google Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/) Boston Housing Price Prediction with Feed Forward Neural Networks from MIT Deep LearningLet's start with using a fully-connected neural network to do predict housing prices. The following image highlights the difference between regression and classification (see part 2). Given an observation as input, **regression** outputs a continuous value (e.g., exact temperature) and classificaiton outputs a class/category that the observation belongs to.For the Boston housing dataset, we get 506 rows of data, with 13 features in each. 
Our task is to build a regression model that takes these 13 features as input and outputs a single-value prediction of the "median value of owner-occupied homes (in $1000)." Now, we load the dataset. Loading the dataset returns four NumPy arrays:* The `train_features` and `train_labels` arrays are the *training set*—the data the model uses to learn.* The model is tested against the *test set*: the `test_features` and `test_labels` arrays. [tf.keras](https://www.tensorflow.org/guide/keras) is the simplest way to build and train neural network models in TensorFlow. So, that's what we'll stick with in this tutorial, unless the models necessitate a lower-level API. Note that there's [tf.keras](https://www.tensorflow.org/guide/keras) (comes with TensorFlow) and there's [Keras](https://keras.io/) (standalone).
###Code
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
# Commonly used modules
import numpy as np
import os
import sys
# Images, plots, display, and visualization
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import cv2
import IPython
from six.moves import urllib
print(tf.__version__)
# Set common constants
this_repo_url = 'https://github.com/arunkumarramanan/mit-deep-learning/raw/master/'
this_tutorial_url = this_repo_url + 'tutorial_deep_learning_basics'
(train_features, train_labels), (test_features, test_labels) = keras.datasets.boston_housing.load_data()
# get per-feature statistics (mean, standard deviation) from the training set to normalize by
train_mean = np.mean(train_features, axis=0)
train_std = np.std(train_features, axis=0)
train_features = (train_features - train_mean) / train_std
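# note: the test set is normalized later with these same training-set statistics (see the evaluation cell below)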
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/boston_housing.npz
57344/57026 [==============================] - 0s 0us/step
###Markdown
Build the modelBuilding the neural network requires configuring the layers of the model, then compiling the model. First we stack a few layers together using `keras.Sequential`. Next we configure the loss function, optimizer, and metrics to monitor. These are added during the model's compile step:* *Loss function* - measures how accurate the model is during training, we want to minimize this with the optimizer.* *Optimizer* - how the model is updated based on the data it sees and its loss function.* *Metrics* - used to monitor the training and testing steps.Let's build a network with 1 hidden layer of 20 neurons, and use mean squared error (MSE) as the loss function (most common one for regression problems):
###Code
def build_model():
model = keras.Sequential([
Dense(20, activation=tf.nn.relu, input_shape=[len(train_features[0])]),
Dense(1)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='mse',
metrics=['mae', 'mse'])
return model
###Output
_____no_output_____
###Markdown
Train the modelTraining the neural network model requires the following steps:1. Feed the training data to the model—in this example, the `train_features` and `train_labels` arrays.2. The model learns to associate features and labels.3. We ask the model to make predictions about a test set—in this example, the `test_features` array. We verify that the predictions match the labels from the `test_labels` array. To start training, call the `model.fit` method—the model is "fit" to the training data:
###Code
# this helps make our output less verbose but still shows progress
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
model = build_model()
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=50)
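# stop training early once the validation loss has not improved for 50 consecutive epochs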
history = model.fit(train_features, train_labels, epochs=1000, verbose=0, validation_split = 0.1,
callbacks=[early_stop, PrintDot()])
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
# show RMSE measure to compare to Kaggle leaderboard on https://www.kaggle.com/c/boston-housing/leaderboard
rmse_final = np.sqrt(float(hist['val_mean_squared_error'].tail(1)))
print()
print('Final Root Mean Square Error on validation set: {}'.format(round(rmse_final, 3)))
###Output
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
.......................................................................................
Final Root Mean Square Error on validation set: 2.322
###Markdown
Now, let's plot the loss function measure on the training and validation sets. The validation set is used to prevent overfitting ([learn more about it here](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit)). However, because our network is small, training converges without noticeably overfitting the data, as the plot shows.
###Code
def plot_history():
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [Thousand Dollars$^2$]')
plt.plot(hist['epoch'], hist['mean_squared_error'], label='Train Error')
plt.plot(hist['epoch'], hist['val_mean_squared_error'], label = 'Val Error')
plt.legend()
plt.ylim([0,50])
plot_history()
###Output
_____no_output_____
###Markdown
Next, compare how the model performs on the test dataset:
###Code
test_features_norm = (test_features - train_mean) / train_std
mse, _, _ = model.evaluate(test_features_norm, test_labels)
rmse = np.sqrt(mse)
print('Root Mean Square Error on test set: {}'.format(round(rmse, 3)))
###Output
102/102 [==============================] - 0s 121us/step
Root Mean Square Error on test set: 4.144
|
python/nano/notebooks/pytorch/cifar10/nano-inference-example.ipynb | ###Markdown
BigDL-Nano Inference Example--- This example shows the usage of bigdl-nano pytorch inference pipeline.
###Code
import os
from time import time
import torch
from torch import nn
import torch.nn.functional as F
import torchvision
from pl_bolts.datamodules import CIFAR10DataModule
from pl_bolts.transforms.dataset_normalizations import cifar10_normalization
from pytorch_lightning import LightningModule, seed_everything
from torch.optim.lr_scheduler import OneCycleLR
from torchmetrics.functional import accuracy
from bigdl.nano.pytorch.trainer import Trainer
from bigdl.nano.pytorch.vision import transforms
###Output
/opt/conda/envs/testNotebook/lib/python3.7/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
###Markdown
CIFAR10 Data Module---Import the existing data module from bolts and modify the train and test transforms.You could access [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) for a view of the whole dataset.
###Code
def prepare_data(data_path, batch_size, num_workers):
train_transforms = transforms.Compose(
[
transforms.RandomCrop(32, 4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
cifar10_normalization()
]
)
test_transforms = transforms.Compose(
[
transforms.ToTensor(),
cifar10_normalization()
]
)
cifar10_dm = CIFAR10DataModule(
data_dir=data_path,
batch_size=batch_size,
num_workers=num_workers,
train_transforms=train_transforms,
test_transforms=test_transforms,
val_transforms=test_transforms
)
return cifar10_dm
###Output
_____no_output_____
###Markdown
Resnet___Modify the pre-existing Resnet architecture from TorchVision. The pre-existing architecture is based on ImageNet images (224x224) as input. So we need to modify it for CIFAR10 images (32x32).
###Code
def create_model():
model = torchvision.models.resnet18(pretrained=False, num_classes=10)
model.conv1 = nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
model.maxpool = nn.Identity()
return model
###Output
_____no_output_____
###Markdown
Lightning Module___Check out the [configure_optimizers](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.htmlconfigure-optimizers) method to use custom Learning Rate schedulers. The OneCycleLR with SGD will get you to around 92-93% accuracy in 20-30 epochs and 93-94% accuracy in 40-50 epochs. Feel free to experiment with different LR schedules from https://pytorch.org/docs/stable/optim.htmlhow-to-adjust-learning-rate
###Code
class LitResnet(LightningModule):
def __init__(self, learning_rate=0.05):
super().__init__()
self.save_hyperparameters()
self.model = create_model()
self.example_input_array = torch.Tensor(64, 3, 32, 32)
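        # example_input_array lets Lightning (and Nano's Trainer.trace) infer the input shape without an explicit input_sample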
def forward(self, x):
out = self.model(x)
return F.log_softmax(out, dim=1)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
self.log("train_loss", loss)
return loss
def evaluate(self, batch, stage=None):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
if stage:
self.log(f"{stage}_loss", loss, prog_bar=True)
self.log(f"{stage}_acc", acc, prog_bar=True)
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
self.log("val_loss", loss, prog_bar=True)
self.log("val_acc", acc, prog_bar=True)
def test_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
self.log_dict({'test_loss': loss, 'test_acc': acc})
def configure_optimizers(self):
optimizer = torch.optim.SGD(
self.parameters(),
lr=self.hparams.learning_rate,
momentum=0.9,
weight_decay=5e-4,
)
steps_per_epoch = 45000
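        # 45,000 = CIFAR10 training images remaining after the data module's validation split (out of 50,000)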
scheduler_dict = {
"scheduler": OneCycleLR(
optimizer,
0.1,
epochs=self.trainer.max_epochs,
steps_per_epoch=steps_per_epoch,
),
"interval": "step",
}
return {"optimizer": optimizer, "lr_scheduler": scheduler_dict}
seed_everything(7)
PATH_DATASETS = os.environ.get("PATH_DATASETS", ".")
BATCH_SIZE = 64
NUM_WORKERS = 0
data_module = prepare_data(PATH_DATASETS, BATCH_SIZE, NUM_WORKERS)
SUBSET = int(os.environ.get('SUBSET', 1))
trainer = Trainer(progress_bar_refresh_rate=10)
###Output
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
###Markdown
Load Model---Load the LitResnet Model using the checkpoint saving using LightningModule after single process training in the nano-trainer-example.
###Code
pl_model = LitResnet.load_from_checkpoint('checkpoints/model.ckpt')
data_module.setup("test")
mask = list(range(0, len(data_module.test_dataloader().dataset), SUBSET))
test_set = torch.utils.data.Subset(data_module.test_dataloader().dataset, mask)
from torch.utils.data import DataLoader
test_dataloader = DataLoader(test_set, batch_size=BATCH_SIZE, shuffle=False, num_workers=NUM_WORKERS)
start = time()
pl_model.eval()
for x, _ in test_dataloader:
with torch.no_grad():
pl_model(x)
infer_time = time() - start
###Output
_____no_output_____
###Markdown
Get Accelerated Module---Use Trainer.trace from bigdl.nano.pytorch.trainer to convert a model into an accelerated module for inference. The definition of trace is:
```
trace(model: nn.Module, input_sample=None, accelerator=None)
  :param model: A torch.nn.Module model, including pl.LightningModule.
  :param input_sample: A set of inputs for tracing, defaults to None if you have traced before or the model is a LightningModule with an example_input_array.
  :param accelerator: The accelerator to use, defaults to None meaning staying in the Pytorch backend. 'openvino' and 'onnxruntime' are supported for now.
  :return: Model with different acceleration (OpenVINO/ONNX Runtime).
```
###Code
onnx_model = Trainer.trace(pl_model, accelerator="onnxruntime", input_sample=torch.Tensor(64, 3, 32, 32))
start = time()
for x, _ in test_dataloader:
inference_res_onnx = onnx_model(x)
onnx_infer_time = time() - start
openvino_model = Trainer.trace(pl_model, accelerator="openvino")
start = time()
for x, _ in test_dataloader:
openvino_model(x)
openvino_infer_time = time() - start
template = """
| Precision | Inference Time(s) |
| Pytorch | {:5.2f} |
| ONNX | {:5.2f} |
| Openvino | {:5.2f} |
"""
summary = template.format(
infer_time,
onnx_infer_time,
openvino_infer_time
)
print(summary)
###Output
| Precision | Inference Time(s) |
| Pytorch | 15.72 |
| ONNX | 7.65 |
| Openvino | 5.99 |
###Markdown
Quantize Model. Use Trainer.quantize from bigdl.nano.pytorch.trainer to calibrate a Pytorch-Lightning model for post-training quantization. Here are some parameters that might be useful to you:
```
:param pl_model: A Pytorch-Lightning model to be quantized.
:param precision: Global precision of the quantized model, supported types: 'int8', 'bf16', 'fp16', defaults to 'int8'.
:param accelerator: Use accelerator None, 'onnxruntime', 'openvino', defaults to None. None means staying in pytorch.
:param calib_dataloader: A torch.utils.data.dataloader.DataLoader object for calibration. Required for static quantization.
:param approach: 'static' or 'dynamic'. 'static': post_training_static_quant, 'dynamic': post_training_dynamic_quant. Default: 'static'.
:param input_sample: An input example to convert the pytorch model into ONNX/OpenVINO.
:return: An accelerated Pytorch-Lightning model if quantization is successful.
```
Access more details from [Source](https://github.com/intel-analytics/BigDL/blob/main/python/nano/src/bigdl/nano/pytorch/trainer/Trainer.py#L234)
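For reference, here is a minimal sketch of the dynamic variant, based only on the signature quoted above and not executed in this notebook; per that signature, dynamic post-training quantization does not need a calibration dataloader (the variable name `i8_dynamic_model` is ours):
```python
# hypothetical call following the quoted signature: dynamic post-training quantization
i8_dynamic_model = trainer.quantize(pl_model, approach='dynamic')
```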
###Code
i8_model = trainer.quantize(pl_model, calib_dataloader=test_dataloader)
start = time()
for x, _ in test_dataloader:
inference_res_i8 = i8_model(x)
i8_inference_time = time() - start
i8_acc = 0.0
fp32_acc = 0.0
for x, y in test_dataloader:
output_i8 = i8_model(x)
output_fp32 = pl_model(x)
i8_acc += accuracy(output_i8, y)
fp32_acc += accuracy(output_fp32, y)
i8_acc = i8_acc/len(test_dataloader)
fp32_acc = fp32_acc/len(test_dataloader)
template = """
| Precision | Inference Time(s) | Accuracy(%) |
| FP32 | {:5.2f} | {:5.4f} |
| INT8 | {:5.2f} | {:5.4f} |
| Improvement(%) | {:5.2f} | {:5.4f} |
"""
summary = template.format(
infer_time, fp32_acc,
i8_inference_time, i8_acc,
(1 - i8_inference_time /infer_time) * 100,
i8_acc - fp32_acc
)
print(summary)
###Output
| Precision | Inference Time(s) | Accuracy(%) |
| FP32 | 15.72 | 0.8619 |
| INT8 | 4.70 | 0.8608 |
| Improvement(%) | 70.09 | -0.0011 |
###Markdown
BigDL-Nano Inference Example--- This example shows the usage of bigdl-nano pytorch inference pipeline.
###Code
import os
from time import time
import torch
from torch import nn
import torch.nn.functional as F
import torchvision
from pl_bolts.datamodules import CIFAR10DataModule
from pl_bolts.transforms.dataset_normalizations import cifar10_normalization
from pytorch_lightning import LightningModule, seed_everything
from torch.optim.lr_scheduler import OneCycleLR
from torchmetrics.functional import accuracy
from bigdl.nano.pytorch.trainer import Trainer
from bigdl.nano.pytorch.vision import transforms
###Output
_____no_output_____
###Markdown
CIFAR10 Data Module---Import the existing data module from bolts and modify the train and test transforms.You could access [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) for a view of the whole dataset.
###Code
def prepare_data(data_path, batch_size, num_workers):
train_transforms = transforms.Compose(
[
transforms.RandomCrop(32, 4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
cifar10_normalization()
]
)
test_transforms = transforms.Compose(
[
transforms.ToTensor(),
cifar10_normalization()
]
)
cifar10_dm = CIFAR10DataModule(
data_dir=data_path,
batch_size=batch_size,
num_workers=num_workers,
train_transforms=train_transforms,
test_transforms=test_transforms,
val_transforms=test_transforms
)
return cifar10_dm
###Output
_____no_output_____
###Markdown
Resnet___Modify the pre-existing Resnet architecture from TorchVision. The pre-existing architecture is based on ImageNet images (224x224) as input. So we need to modify it for CIFAR10 images (32x32).
###Code
def create_model():
model = torchvision.models.resnet18(pretrained=False, num_classes=10)
model.conv1 = nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
model.maxpool = nn.Identity()
return model
###Output
_____no_output_____
###Markdown
Lightning Module___Check out the [configure_optimizers](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.htmlconfigure-optimizers) method to use custom Learning Rate schedulers. The OneCycleLR with SGD will get you to around 92-93% accuracy in 20-30 epochs and 93-94% accuracy in 40-50 epochs. Feel free to experiment with different LR schedules from https://pytorch.org/docs/stable/optim.htmlhow-to-adjust-learning-rate
###Code
class LitResnet(LightningModule):
def __init__(self, learning_rate=0.05):
super().__init__()
self.save_hyperparameters()
self.model = create_model()
self.example_input_array = torch.Tensor(64, 3, 32, 32)
def forward(self, x):
out = self.model(x)
return F.log_softmax(out, dim=1)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
self.log("train_loss", loss)
return loss
def evaluate(self, batch, stage=None):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
if stage:
self.log(f"{stage}_loss", loss, prog_bar=True)
self.log(f"{stage}_acc", acc, prog_bar=True)
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
self.log("val_loss", loss, prog_bar=True)
self.log("val_acc", acc, prog_bar=True)
def test_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
self.log_dict({'test_loss': loss, 'test_acc': acc})
def configure_optimizers(self):
optimizer = torch.optim.SGD(
self.parameters(),
lr=self.hparams.learning_rate,
momentum=0.9,
weight_decay=5e-4,
)
steps_per_epoch = 45000
scheduler_dict = {
"scheduler": OneCycleLR(
optimizer,
0.1,
epochs=self.trainer.max_epochs,
steps_per_epoch=steps_per_epoch,
),
"interval": "step",
}
return {"optimizer": optimizer, "lr_scheduler": scheduler_dict}
seed_everything(7)
PATH_DATASETS = os.environ.get("PATH_DATASETS", ".")
BATCH_SIZE = 64
NUM_WORKERS = 0
data_module = prepare_data(PATH_DATASETS, BATCH_SIZE, NUM_WORKERS)
trainer = Trainer(progress_bar_refresh_rate=10)
###Output
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
###Markdown
Load Model---Load the LitResnet model from the checkpoint saved by the LightningModule after single-process training in the nano-trainer-example.
###Code
pl_model = LitResnet.load_from_checkpoint('checkpoints/renet18_single_none.ckpt')
data_module.setup("test")
start = time()
pl_model.eval()
for x, _ in data_module.test_dataloader():
with torch.no_grad():
pl_model(x)
infer_time = time() - start
###Output
_____no_output_____
###Markdown
Get Accelerated Module---Use Trainer.trace from bigdl.nano.pytorch.trainer to convert a model into an accelerated module for inference.The definition of trace is:```trace(model: nn.Module, input_sample=None, accelerator=None) :param model: A torch.nn.Module model, including pl.LightningModule. :param input_sample: A set of inputs for trace, defaults to None if you have traced before or the model is a LightningModule with an example_input_array. :param accelerator: The accelerator to use, defaults to None meaning staying in the Pytorch backend. 'openvino' and 'onnxruntime' are supported for now. :return: Model with different acceleration (OpenVINO/ONNX Runtime).```
###Code
onnx_model = Trainer.trace(pl_model, accelerator="onnxruntime", input_sample=torch.Tensor(64, 3, 32, 32))
start = time()
for x, _ in data_module.test_dataloader():
inference_res_onnx = onnx_model(x)
onnx_infer_time = time() - start
openvino_model = Trainer.trace(pl_model, accelerator="openvino")
start = time()
for x, _ in data_module.test_dataloader():
openvino_model(x)
openvino_infer_time = time() - start
template = """
| Precision | Inference Time(s) |
| Pytorch | {:5.2f} |
| ONNX | {:5.2f} |
| Openvino | {:5.2f} |
"""
summary = template.format(
infer_time,
onnx_infer_time,
openvino_infer_time
)
print(summary)
###Output
| Precision | Inference Time(s) |
| Pytorch | 15.72 |
| ONNX | 7.65 |
| Openvino | 5.99 |
###Markdown
Quantize ModelUse Trainer.quantize from bigdl.nano.pytorch.trainer to calibrate a Pytorch-Lightning model for post-training quantization. Here are some parameters that might be useful to you:```:param pl_model: A Pytorch-Lightning model to be quantized.:param precision: Global precision of quantized model, supported type: 'int8', 'bf16', 'fp16', defaults to 'int8'.:param accelerator: Use accelerator 'None', 'onnxruntime', 'openvino', defaults to None. None means staying in pytorch.:param calib_dataloader: A torch.utils.data.dataloader.DataLoader object for calibration. Required for static quantization.:param approach: 'static' or 'dynamic'. 'static': post_training_static_quant, 'dynamic': post_training_dynamic_quant. Default: 'static'.:param input_sample: An input example to convert pytorch model into ONNX/OpenVINO. :return: An accelerated Pytorch-Lightning Model if quantization is successful.```Access more details from [Source](https://github.com/intel-analytics/BigDL/blob/main/python/nano/src/bigdl/nano/pytorch/trainer/Trainer.pyL234)
###Code
i8_model = trainer.quantize(pl_model, calib_dataloader=data_module.test_dataloader())
start = time()
for x, _ in data_module.test_dataloader():
inference_res_i8 = i8_model(x)
i8_inference_time = time() - start
i8_acc = 0.0
fp32_acc = 0.0
for x, y in data_module.test_dataloader():
output_i8 = i8_model(x)
output_fp32 = pl_model(x)
i8_acc += accuracy(output_i8, y)
fp32_acc += accuracy(output_fp32, y)
i8_acc = i8_acc/len(data_module.test_dataloader())
fp32_acc = fp32_acc/len(data_module.test_dataloader())
template = """
| Precision | Inference Time(s) | Accuracy(%) |
| FP32 | {:5.2f} | {:5.4f} |
| INT8 | {:5.2f} | {:5.4f} |
| Improvement(%) | {:5.2f} | {:5.4f} |
"""
summary = template.format(
infer_time, fp32_acc,
i8_inference_time, i8_acc,
(1 - i8_inference_time /infer_time) * 100,
i8_acc - fp32_acc
)
print(summary)
###Output
| Precision | Inference Time(s) | Accuracy(%) |
| FP32 | 15.72 | 0.8619 |
| INT8 | 4.70 | 0.8608 |
| Improvement(%) | 70.09 | -0.0011 |
|
py_serial/ranging_graph.ipynb | ###Markdown
RF IDs
###Code
node_ids = mp.rf_get_active_short_ids()
#utl.save_json_timestamp("config_twr",node_ids)
###Output
(RR) : (4)/(E2F96EB1D7A476CC)
(Green) : (2)/(530BE91D3559D690)
(RL) : (3)/(CBC216DC164B1DE8)
(Simple) : (1)/(5F7D70F99F462C99)
4 entries saved in ./test_db/2021.08.08/16-27-32 config_twr.json
###Markdown
Ranging
###Code
lists_list = [
("Green",["Simple","RR"]),
("RL",["Simple","RR"])
]
r_graph = atl.range_graph("test_square_2",lists_list)
mp.view(r_graph)
p_graph = atl.multilateration(r_graph)
mp.stop()
###Output
closing serial port
uart> COM35 : is Closed
|
dense_correspondence/experiments/normalize_descriptors/eval_normalize_descriptors.ipynb | ###Markdown
Bag of Tricks ExperimentAnalyze the effects of our different "tricks".1. Sample matches off mask2. Scale by hard negatives3. L2 pixel loss on matchesWe will compare standard network, networks missing one trick only, and a network without any tricks (i.e same as Tanner Schmidt)
###Code
import numpy as np
import os
import fnmatch
import pandas as pd
import sklearn.metrics as sm
import scipy.stats as ss
import matplotlib.pyplot as plt
import dense_correspondence_manipulation.utils.utils as utils
utils.add_dense_correspondence_to_python_path()
from dense_correspondence.evaluation.evaluation import DenseCorrespondenceEvaluationPlotter as DCEP
# folder_name = "trick_analysis"
# path_to_nets = os.path.join("/home/manuelli/code/data_volume/pdc/trained_models", folder_name)
# all_nets = sorted(os.listdir(path_to_nets))
# nets_to_plot = []
# for net in all_nets:
# # if "no_dr" in net:
# # continue
# nets_to_plot.append(os.path.join(folder_name,net))
# nets_list = []
# nets_to_plot = []
# nets_list.append("standard_3")
# nets_list.append("dont_scale_hard_negatives_3")
# nets_list.append("dont_sample_from_mask_3")
# nets_list.append("no_tricks_3")
# for net in nets_list:
# nets_to_plot.append(os.path.join(folder_name,net))
# # print nets_to_plot
# print nets_to_plot
# # nets_to_plot = ["starbot_1_train_3"]
nets_to_plot = []
nets_to_plot.append("trick_analysis/standard_3")
nets_to_plot.append("sandbox/normalize_descriptors_4")
###Output
_____no_output_____
###Markdown
Training
###Code
p = DCEP()
dc_source_dir = utils.getDenseCorrespondenceSourceDir()
network_name = nets_to_plot[0]
path_to_csv = os.path.join(dc_source_dir, "data_volume", "pdc", "trained_models", network_name, "analysis/train/data.csv")
fig_axes = DCEP.run_on_single_dataframe(path_to_csv, label=network_name, save=False)
for network_name in nets_to_plot[1:]:
path_to_csv = os.path.join(dc_source_dir, "data_volume", "pdc", "trained_models", network_name, "analysis/train/data.csv")
fig_axes = DCEP.run_on_single_dataframe(path_to_csv, label=network_name, previous_fig_axes=fig_axes, save=False)
_, axes = fig_axes
# axes[0].set_title("Training Set")
plt.show()
###Output
_____no_output_____
###Markdown
Test
###Code
p = DCEP()
dc_source_dir = utils.getDenseCorrespondenceSourceDir()
network_name = nets_to_plot[0]
path_to_csv = os.path.join(dc_source_dir, "data_volume", "pdc", "trained_models", network_name, "analysis/test/data.csv")
fig_axes = DCEP.run_on_single_dataframe(path_to_csv, label=network_name, save=False)
for network_name in nets_to_plot[1:]:
path_to_csv = os.path.join(dc_source_dir, "data_volume", "pdc", "trained_models", network_name, "analysis/test/data.csv")
fig_axes = DCEP.run_on_single_dataframe(path_to_csv, label=network_name, previous_fig_axes=fig_axes, save=False)
_, axes = fig_axes
# axes[0].set_title("Test Set")
plt.show()
###Output
_____no_output_____
###Markdown
Cross Scene Single Object
###Code
p = DCEP()
dc_source_dir = utils.getDenseCorrespondenceSourceDir()
network_name = nets_to_plot[0]
path_to_csv = os.path.join(dc_source_dir, "data_volume", "pdc", "trained_models", network_name, "analysis/cross_scene/data.csv")
fig_axes = DCEP.run_on_single_dataframe(path_to_csv, label=network_name, save=False)
for network_name in nets_to_plot[1:]:
path_to_csv = os.path.join(dc_source_dir, "data_volume", "pdc", "trained_models", network_name, "analysis/cross_scene/data.csv")
fig_axes = DCEP.run_on_single_dataframe(path_to_csv, label=network_name, previous_fig_axes=fig_axes, save=False)
_, axes = fig_axes
# axes[0].set_title("Cross Scene Set")
plt.show()
###Output
_____no_output_____
###Markdown
Separating Distinct Objects
###Code
# p = DCEP()
# dc_source_dir = utils.getDenseCorrespondenceSourceDir()
# analysis_folder = analysis_folders[0]
# path_to_csv = os.path.join(model_folder, analysis_folder,
# "across_object/data.csv")
# fig_axes = DCEP.run_on_single_dataframe_across_objects(path_to_csv, label=analysis_folder, save=False)
# for analysis_folder in analysis_folders[1:]:
# path_to_csv = os.path.join(model_folder,
# analysis_folder, "across_object/data.csv")
# fig_axes = DCEP.run_on_single_dataframe_across_objects(path_to_csv, label=analysis_folder, previous_fig_axes=fig_axes, save=False)
# _, axes = fig_axes
# # axes[0].set_title("Across Object")
# plt.show()
###Output
_____no_output_____ |
notebooks/autoencoders/MNIST10/n_anomaly_detector_noise_classifier_last.ipynb | ###Markdown
Load the classifier
###Code
from classifier import create_model
import os
from keras.utils.np_utils import to_categorical
classifier = create_model()
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'],)
weight_file = './weights/mnist_classifier.hd5'
if(os.path.exists(weight_file)):
classifier.load_weights(weight_file)
else:
classifier.fit(x_train, to_categorical(y_train),
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(x_test, to_categorical(y_test)),
callbacks=[])
classifier.save_weights(weight_file)
# Make test set predictions
x_pred = classifier.predict(x_test)
x_pred_digit = np.argmax(x_pred, axis=1)
x_pred_conf = np.max(x_pred, axis=1)
###Output
_____no_output_____
###Markdown
Create the Autoencoders
###Code
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Dense, Reshape
from keras.regularizers import l1
from keras.models import Model
def create_model(i):
n_hidden = 256
input_img = Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(2, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
    # at this point the representation is (7, 7, 2)
x = Flatten()(x)
encoded = Dense(n_hidden, activity_regularizer=l1(10e-8))(x)
# representation is now size n_hidden
x = Dense(7*7*32)(encoded)
x = Reshape((7, 7, 32))(x)
x = Conv2D(2, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
model = Model(input_img, decoded)
# Change layer names to prevent weight sharing
for i, layer in enumerate(model.layers):
layer.name = 'layer'+str(i)+'_model'
return model
import os, gc
autoencoders = [create_model(i) for i in range(10)]
for i in range(10):
print("Training autoencoder", i)
autoencoders[i].compile(optimizer='adadelta', loss='binary_crossentropy')
f = y_train == i
x_train_noisy_filtered = x_train_noisy[f]
x_train_filtered = x_train[f]
f = y_test == i
x_test_noisy_filtered = x_test_noisy[f]
x_test_filtered = x_test[f]
weight_file = './weights/mnist_autoencoder_digit_%d_binary_crossentropy_noise.hd5' % i
if(os.path.exists(weight_file)):
autoencoders[i].load_weights(weight_file)
else:
autoencoders[i].fit(x_train_noisy_filtered, x_train_filtered,
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(x_test_noisy_filtered, x_test_filtered),
callbacks=[])
autoencoders[i].save_weights(weight_file)
show_10_images(autoencoders[i].predict(x_test))
from keras.layers import Lambda
from keras.losses import binary_crossentropy
import keras.backend as K
input_ = Input(shape=(28, 28, 1))
predictions = [l(input_) for l in autoencoders]
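# for each digit-specific autoencoder, compute the binary cross-entropy between the input
# and its reconstruction; the resulting model outputs one reconstruction loss per autoencoder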
losses = [
Lambda(lambda x: binary_crossentropy(
K.batch_flatten(input_),
K.batch_flatten(x))
)(p) for p in predictions]
# min_loss = Lambda(lambda a: K.min(a))(losses)
# anomaly_detector = Model(input_, min_loss)
anomaly_detector = Model(input_, losses)
def min_max_scale(value, min_, max_):
return (value - min_) / (max_ - min_)
def min_max_scale_array(values, mins, maxes):
return (values - mins) / (maxes - mins)
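# record, for each digit, the smallest and largest reconstruction losses seen on its training
# images; these per-digit ranges are used to min-max normalize losses at inference time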
mins = []
maxes = []
for i in range(10):
losses = anomaly_detector.predict(x_train[y_train == i])
mins.append(np.min(losses))
maxes.append(np.max(losses))
mins = np.array(mins)
maxes = np.array(maxes)
from keras.layers import Lambda
from keras.losses import binary_crossentropy
import keras.backend as K
input_ = Input(shape=(28, 28, 1))
recon_ = Input(shape=(28, 28, 1))
recon_loss = Lambda(lambda x: binary_crossentropy(
K.batch_flatten(x[0]),
K.batch_flatten(x[1])
))([input_, recon_])
loss_calculator = Model(inputs=[input_, recon_], outputs=recon_loss)
def get_recons_loss(samples):
pred = classifier.predict(samples)
pred_class = np.argmax(pred, axis=1)
pred_conf = np.max(pred, axis=1)
recons_losses = np.array(anomaly_detector.predict(samples))
recons_losses = min_max_scale_array(recons_losses.T, mins, maxes)
recons_mins = np.min(recons_losses, axis=1)
recons_argmins = np.argmin(recons_losses, axis=1)
losses = recons_mins
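    # flag a sample as anomalous (very large loss) when the autoencoder with the lowest
    # normalized reconstruction loss disagrees with the classifier's predicted digit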
losses[recons_argmins != pred_class] = 1e15
return losses, pred
###Output
_____no_output_____
###Markdown
Fashion MNIST
###Code
from keras.datasets import fashion_mnist
from sklearn.metrics import roc_auc_score
from sklearn.metrics import mean_squared_error
_, (fashion_x_test, _) = fashion_mnist.load_data()
fashion_x_test = fashion_x_test.astype('float32') / 255.
fashion_x_test = np.reshape(fashion_x_test, (len(x_test), 28, 28, 1))
recon_losses, preds = get_recons_loss(fashion_x_test)
show_10_images(fashion_x_test)
print("class")
print(np.argmax(preds[:10], axis=-1))
print("\nclassifier confidence (higher is more confident)")
print([round(v, 2) for v in np.max(preds[:10], axis=-1)])
print("\nnormalized recon loss (higher is worse)")
print([round(v, 2) for v in recon_losses[:10]])
###Output
_____no_output_____
###Markdown
Calc AUROC
###Code
labels = len(x_test) * [0] + len(fashion_x_test) * [1]
test_samples = np.concatenate((x_test, fashion_x_test))
losses, _ = get_recons_loss(test_samples)
print("AUROC:", roc_auc_score(labels, losses))
###Output
AUROC: 0.830850385
###Markdown
EMNIST Letters
###Code
from torchvision.datasets import EMNIST
emnist_letters = EMNIST('./', "letters", train=False, download=True)
emnist_letters = emnist_letters.test_data.numpy()
emnist_letters = emnist_letters.astype('float32') / 255.
emnist_letters = np.reshape(emnist_letters, (len(emnist_letters), 28, 28, 1))
emnist_letters = np.swapaxes(emnist_letters, 1, 2)
recon_losses, preds = get_recons_loss(emnist_letters)
show_10_images(emnist_letters)
print("class")
print(np.argmax(preds[:10], axis=-1))
print("\nclassifier confidence (higher is more confident)")
print([round(v, 2) for v in np.max(preds[:10], axis=-1)])
print("\nnormalized recon loss (higher is worse)")
print([round(v, 2) for v in recon_losses[:10]])
###Output
_____no_output_____
###Markdown
Calc AUROC
###Code
labels = len(x_test) * [0] + len(emnist_letters) * [1]
test_samples = np.concatenate((x_test, emnist_letters))
losses, _ = get_recons_loss(test_samples)
print("AUROC:", roc_auc_score(labels, losses))
###Output
AUROC: 0.8244365673076923
###Markdown
Gaussian noise
###Code
mnist_mean = np.mean(x_train)
mnist_std = np.std(x_train)
gaussian_data = np.random.normal(mnist_mean, mnist_std, size=(10000, 28, 28, 1))
recon_losses, preds = get_recons_loss(gaussian_data)
show_10_images(gaussian_data)
print("class")
print(np.argmax(preds[:10], axis=-1))
print("\nclassifier confidence (higher is more confident)")
print([round(v, 2) for v in np.max(preds[:10], axis=-1)])
print("\nnormalized recon loss (higher is worse)")
print([round(v, 2) for v in recon_losses[:10]])
###Output
_____no_output_____
###Markdown
Calc AUROC
###Code
labels = len(x_test) * [0] + len(gaussian_data) * [1]
test_samples = np.concatenate((x_test, gaussian_data))
losses, _ = get_recons_loss(test_samples)
print("AUROC:", roc_auc_score(labels, losses))
###Output
AUROC: 0.85517867
###Markdown
Uniform noise
###Code
import math
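# for a Uniform(a, b) distribution std = (b - a) / (2 * sqrt(3)), so a half-width of
# sqrt(3) * mnist_std centered on mnist_mean matches the MNIST mean and standard deviation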
b = math.sqrt(3.) * mnist_std
a = -b + mnist_mean
b += mnist_mean
uniform_data = np.random.uniform(low=a, high=b, size=(10000, 28, 28, 1))
uniform_data = uniform_data - np.mean(uniform_data) + mnist_mean
# uniform_data = np.clip(uniform_data, 0., 255.)
recon_losses, preds = get_recons_loss(uniform_data)
show_10_images(uniform_data)
print("class")
print(np.argmax(preds[:10], axis=-1))
print("\nclassifier confidence (higher is more confident)")
print([round(v, 2) for v in np.max(preds[:10], axis=-1)])
print("\nnormalized recon loss (higher is worse)")
print([round(v, 2) for v in recon_losses[:10]])
###Output
_____no_output_____
###Markdown
Calc AUROC
###Code
labels = len(x_test) * [0] + len(uniform_data) * [1]
test_samples = np.concatenate((x_test, uniform_data))
losses, _ = get_recons_loss(test_samples)
print("AUROC:", roc_auc_score(labels, losses))
###Output
AUROC: 0.855307235
|
make_figures/05_rarefaction.ipynb | ###Markdown
RarefactionRarefaction curves plot the number of unique species (in our case, unique antibody clonotypes) against the total sample size. "Bending" of the rarefaction curve toward a horizontal line indicates the sampling depth is approaching saturation. In this notebook, we will generate the following plots: * Rarefaction curve (**Figure 2a**) * Rarefaction curve inset (**Figure 2a**)The following Python packages are required to run this notebook: * numpy * matplotlib * seaborn * [abutils](https://www.github.com/briney/abtools) They can be installed by running `pip install numpy matplotlib seaborn abutils`.
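As a conceptual aside (a minimal sketch, not part of the manuscript pipeline), a rarefaction curve can be approximated by repeatedly subsampling a pool of clonotype assignments and counting the unique clonotypes at each depth; the `clonotypes` array below is a hypothetical stand-in for real assignments:
```
import numpy as np

rng = np.random.default_rng(0)
# hypothetical data: 5,000 sequences drawn from ~1,500 clonotypes
clonotypes = rng.integers(0, 1500, size=5000)

depths = np.linspace(100, len(clonotypes), 20, dtype=int)
curve = []
for depth in depths:
    # average the number of unique clonotypes over a few random subsamples at this depth
    reps = [len(np.unique(rng.choice(clonotypes, size=depth, replace=False)))
            for _ in range(5)]
    curve.append(np.mean(reps))
# plotting depths against curve gives the rarefaction curve
```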
###Code
from __future__ import print_function, division
import os
import subprocess as sp
import sys
import tempfile
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from abutils.utils.pipeline import list_files, make_dir
from abutils.utils.progbar import progress_bar
%matplotlib inline
###Output
_____no_output_____
###Markdown
Subjects, input file, and colorsThe repeat observation subsampling data in our manuscript is included in this Github repository. However, if you've generated your own input data (either independently or using the code provided [**here**](LINK) using something other than the default output location) and you've saved that data to a different location, please modify the path/filename below.
###Code
# subjects
with open('../data_processing/data/subjects.txt') as f:
subjects = sorted(f.read().split())
# input file
input_path = '../data_processing/data/equal_fraction_downsampling/'
input_filename = 'clonotype-downsampling_duplicate-counts_vj-aa.txt'
input_file = os.path.join(input_path, input_filename)
# colors
colors = sns.hls_palette(10, s=0.9)
colors[3] = sns.hls_palette(11, s=0.9)[3]
colors[4] = sns.hls_palette(12, s=0.9)[5]
color_dict = {s: c for s, c in zip(subjects, colors)}
sns.palplot(colors)
###Output
_____no_output_____
###Markdown
Load data
###Code
with open(input_file) as f:
data = f.read()
xs, ys = [], []
for subject_chunk in data.split('#')[1:]:
xdata = []
ydata = []
subject = subject_chunk.split('>')[0].strip()
for fraction in subject_chunk.split('>')[1:]:
frac = float(fraction.split('\n')[0].strip())
yvals = []
for iteration in fraction.split('\n')[1:]:
if not iteration.strip():
continue
l = [i.strip().split(':') for i in iteration.strip().split()]
c = {int(k): float(v) for k, v in l}
obs = float(sum(c.values()))
total = float(sum([k * v for k, v in c.items()]))
yval = obs / total * frac
yvals.append(yval)
ydata.append(np.mean(yvals))
xdata.append(frac)
xs.append(xdata)
ys.append(ydata)
###Output
_____no_output_____
###Markdown
Rarefaction lineplotIf you'd like to save the figure file (rather than just showing inline without saving), comment out the `plt.show()` line and uncomment the final two lines of the below code block.
###Code
# initialize plot
sns.set_style('white')
plt.figure(figsize=(5, 5))
# plot rarefaction data
for xdata, ydata, subject, color in zip(xs, ys, subjects, colors):
plt.plot([0] + xdata, [0] + ydata, c=color, label=subject, linewidth=2, alpha=0.9)
# plot diagonal reference line
plt.plot((0, 1.1), (0, 1.1), 'k:', linewidth=1)
# style the plot
ax = plt.gca()
# plot legend
ax.legend(loc='upper left', fontsize=11.5)
# set axis limits and labels
ax.set_xlim((0, 1.05))
ax.set_ylim((0, 1.05))
ax.set_xlabel('Observed clonotypes (fraction)', fontsize=14)
ax.set_ylabel('Unique clonotypes (fraction)', fontsize=14)
# style ticks
ax.tick_params(axis='x', labelsize=12)
ax.tick_params(axis='y', which='major', labelsize=12, length=6, width=1.25, pad=12, right=False)
# hide top, left and right spines
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
# save or show
plt.show()
# plt.tight_layout()
# plt.savefig('./figures/rarefaction_per-subject_by-fraction.pdf')
###Output
_____no_output_____
###Markdown
Rarefaction insetIf you'd like to save the figure file (rather than just showing inline without saving), comment out the `plt.show()` line and uncomment the final two lines of the code block.
###Code
# initialize plot
sns.set_style('white')
plt.figure(figsize=(4, 4))
# plot rarefaction data
for xdata, ydata, subject, color in zip(xs, ys, subjects, colors):
plt.plot([0] + xdata, [0] + ydata, c=color, label=subject, linewidth=4.5, alpha=0.9)
# plot diagonal reference line
plt.plot((0, 1.1), (0, 1.1), 'k:', linewidth=3)
# style plot
ax = plt.gca()
# set axis limits
ax.set_xlim((0.85, 1.02))
ax.set_ylim((0.8, 0.97))
# style ticks
ax.set_xticks([0.85, 0.9, 0.95, 1], )
ax.set_xticklabels(['0.85', '0.9', '0.95', '1.0'], size=12)
ax.set_yticks([0.8, 0.85, 0.9, 0.95], )
ax.set_yticklabels(['0.8', '0.85', '0.9', '0.95'], size=12)
ax.tick_params(axis='x', labelsize=14, pad=8, length=6, width=2, top=False)
ax.tick_params(axis='y', labelsize=14, pad=8, length=6, width=2, right=False)
# style the spines
[i.set_linewidth(2) for i in ax.spines.values()]
# save or show
plt.show()
# plt.tight_layout()
# plt.savefig('./figures/rarefaction_per-subject_by-fraction_inset.pdf')
###Output
_____no_output_____ |
baseline_nb.ipynb | ###Markdown
[open in Kaggle](https://www.kaggle.com/meishidou/baseline-nb) Prerequisites
###Code
import json
import os
import random
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from tqdm import tqdm
import transformers
# package in . directory
from bichoice import utils
from baseline.model import BertForClassification
from baseline.data_processor import DataProcessor
assert transformers.__version__ == '4.1.1'
# declare a namespace
D = utils.GlobalSettings({
'DATADIR': './data/',
# select checkpoint-7
'MODELDIR': './outputs/baseline_output/checkpoint-7/',
})
# load training parameters
argD = utils.GlobalSettings(
torch.load(os.path.join(D.MODELDIR, 'training_args.bin')))
print('this model is trained with following hyper parameters:')
print(str(argD))
###Output
_____no_output_____
###Markdown
Show Some Examples
###Code
processor = DataProcessor(D.DATADIR)
tokenizer = transformers.BertTokenizer.from_pretrained(argD.model_name)
test = processor.dataset[2]
# select a literal example from test set
test_e = random.choice(test)
def show_baseline_example(e):
    '''show all info of a single `baseline.InputExample` object'''
print('text_a:')
print(' ', e.text_a)
print('text_b:')
print(' ', e.text_b)
print('text_c:')
print(' ', e.text_c)
print('label:', e.label)
print('guid:', e.guid)
# create several baseline examples from `test_e`
test_b_e = processor._create_examples([test_e], set_type='test')
for i, e in enumerate(test_b_e):
print('-----EXAMPLE{}-----'.format(i+1))
show_baseline_example(e)
###Output
_____no_output_____
###Markdown
Tokenizing Examples
###Code
def show_baseline_features(f):
'''show info of a single `baseline.InputFeatures` object'''
print('-----FIRST TOKEN SEQUENCE-----')
input_mask = np.asarray(f.input_mask)
input_ids = np.asarray(f.input_ids)[input_mask==1]
segment_ids = np.asarray(f.segment_ids)[input_mask==1]
first_sent = tokenizer.convert_ids_to_tokens(input_ids[segment_ids==0])
second_sent = tokenizer.convert_ids_to_tokens(input_ids[segment_ids==1])
print(''.join(first_sent))
print('-----SECOND TOKEN SEQUENCE-----')
print(''.join(second_sent))
test_f_e = processor.convert_examples_to_features(
test_b_e, argD.max_length, tokenizer)[0]
print('label:', test_b_e[0].label)
for i, f in enumerate(test_f_e):
print('-----EXAMPLE{}-----'.format(i+1))
show_baseline_features(f)
###Output
_____no_output_____
###Markdown
Infer with Baseline Model
###Code
# initialize model and load state dict from a checkpoint
device = 'cuda:0' # not compatible with cpu
model = BertForClassification(argD.model_name)
model.load_state_dict(torch.load(os.path.join(D.MODELDIR, 'model.bin')))
model.to(device)
model.eval()
b = processor.get_dataset(test_b_e, tokenizer, argD.max_length)[:]
b = tuple(t.to(device) for t in b)
with torch.no_grad():
output = model(input_ids=b[0],
attention_mask=b[1],
token_type_ids=b[2],
labels=b[3])
logits = output[1].detach().cpu().numpy()
pred = np.argmax(logits, axis=1)[0]
label = b[3][0]
options = [e.text_b for e in test_b_e]
print('infered answer:', options[pred])
print('correct answer:', options[label])
###Output
_____no_output_____ |
Scikit - 30 Ensemble, Bagging, Pasting, Boosting.ipynb | ###Markdown
Simple logistic regression
###Code
estimator_pipe = pipeline.Pipeline([
("preprocessing", preprocessing_pipe),
("est", linear_model.LogisticRegression(random_state=1
, solver="liblinear"))
])
param_grid = {
"est__C": np.random.random(10) + 1
}
gsearch = model_selection.GridSearchCV(estimator_pipe, param_grid, cv = 5
, verbose=1, n_jobs=8, scoring="accuracy")
gsearch.fit(X, y)
print("Best score: ", gsearch.best_score_
, "Best parameters: ", gsearch.best_params_)
###Output
Fitting 5 folds for each of 10 candidates, totalling 50 fits
###Markdown
Ensemble Classifier
###Code
log_clf = linear_model.LogisticRegression(C = 1.53
, solver= "liblinear", random_state=1)
rnd_clf = ensemble.RandomForestClassifier(max_depth=6
, n_estimators = 30, random_state=1)
svm_clf = svm.SVC(C = 1.0, gamma = 0.15, random_state=1)
estimator_pipe = pipeline.Pipeline([
("preprocessing", preprocessing_pipe),
("est", ensemble.VotingClassifier(voting="hard", estimators=
[('lr', log_clf),
('rf', rnd_clf),
('svm', svm_clf)
])
)
])
param_grid = {
"est__svm__C": np.linspace(1.0, 20, 10)
}
gsearch = model_selection.GridSearchCV(estimator_pipe, param_grid, cv = 5
, verbose=1, n_jobs=8, scoring="accuracy")
gsearch.fit(X, y)
print("Best score: ", gsearch.best_score_, "Best parameters: ", gsearch.best_params_)
###Output
[Parallel(n_jobs=8)]: Using backend LokyBackend with 8 concurrent workers.
###Markdown
AdaBoost Classifier
###Code
estimator_pipe = pipeline.Pipeline([
("preprocessing", preprocessing_pipe),
("est", ensemble.AdaBoostClassifier(
linear_model.LogisticRegression(random_state=1
, solver="liblinear")
, n_estimators=200
, algorithm="SAMME.R"
, learning_rate=0.051)
)
])
param_grid = {
"est__base_estimator__C": np.random.random(10) + 1
}
gsearch = model_selection.GridSearchCV(estimator_pipe, param_grid, cv = 5
, verbose=1, n_jobs=8, scoring="accuracy")
gsearch.fit(X, y)
print("Best score: ", gsearch.best_score_, "Best parameters: ", gsearch.best_params_)
###Output
[Parallel(n_jobs=8)]: Using backend LokyBackend with 8 concurrent workers.
###Markdown
Bagging classifier
###Code
estimator_pipe = pipeline.Pipeline([
("preprocessing", preprocessing_pipe),
("est", ensemble.BaggingClassifier(
tree.DecisionTreeClassifier(),
max_samples= 0.5,
n_estimators=50,
bootstrap=True,
oob_score=True)
)
])
param_grid = {
"est__base_estimator__max_depth": np.arange(5, 15)
}
gsearch = model_selection.GridSearchCV(estimator_pipe, param_grid, cv = 5
, verbose=1, n_jobs=8, scoring="accuracy")
gsearch.fit(X, y)
print("Best score: ", gsearch.best_score_, "Best parameters: ", gsearch.best_params_)
###Output
Fitting 5 folds for each of 10 candidates, totalling 50 fits
###Markdown
Gradient Boosted Model
###Code
estimator_pipe = pipeline.Pipeline([
("preprocessing", preprocessing_pipe),
("est", ensemble.GradientBoostingClassifier(random_state=1))
])
param_grid = {
"est__max_depth": np.arange(3, 10),
"est__learning_rate": np.linspace(0.01, 1, 10)
}
gsearch = model_selection.GridSearchCV(estimator_pipe, param_grid, cv = 5
, verbose=1, n_jobs=8, scoring="accuracy")
gsearch.fit(X, y)
print("Best score: ", gsearch.best_score_
, "Best parameters: ", gsearch.best_params_)
scores = pd.DataFrame(gsearch.cv_results_)
scores.head()
scores[scores.rank_test_score == 1]
###Output
_____no_output_____ |
08-Time-Series-Analysis/4-ARIMA-and-Seasonal-ARIMA.ipynb | ###Markdown
___ ___*Copyright Pierian Data 2017**For more information, visit us at www.pieriandata.com* Warning! This is a complicated topic! Remember that this is an optional notebook to go through and that to fully understand it you should read the supplemental links and watch the full explanatory walkthrough video. This notebook and the video lectures are not meant to be a full comprehensive overview of ARIMA, but instead a walkthrough of what you can use it for, so you can later understand why it may or may not be a good choice for Financial Stock Data.____ ARIMA and Seasonal ARIMA Autoregressive Integrated Moving AveragesThe general process for ARIMA models is the following:* Visualize the Time Series Data* Make the time series data stationary* Plot the Correlation and AutoCorrelation Charts* Construct the ARIMA Model* Use the model to make predictionsLet's go through these steps! Step 1: Get the Data (and format it)We will be using some data about monthly milk production, full details on it can be found [here](https://datamarket.com/data/set/22ox/monthly-milk-production-pounds-per-cow-jan-62-dec-75!ds=22ox&display=line).It's saved as a csv for you already, let's load it up:
###Code
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('monthly-milk-production-pounds-p.csv')
df.head()
df.tail()
###Output
_____no_output_____
###Markdown
** Clean Up**Let's clean this up just a little!
###Code
df.columns = ['Month','Milk in pounds per cow']
df.head()
# Weird last value at bottom causing issues
df.drop(168,axis=0,inplace=True)
df['Month'] = pd.to_datetime(df['Month'])
df.head()
df.set_index('Month',inplace=True)
df.head()
df.describe().transpose()
###Output
_____no_output_____
###Markdown
Step 2: Visualize the DataLet's visualize this data with a few methods.
###Code
df.plot()
timeseries = df['Milk in pounds per cow']
timeseries.rolling(12).mean().plot(label='12 Month Rolling Mean')
timeseries.rolling(12).std().plot(label='12 Month Rolling Std')
timeseries.plot()
plt.legend()
timeseries.rolling(12).mean().plot(label='12 Month Rolling Mean')
timeseries.plot()
plt.legend()
###Output
_____no_output_____
###Markdown
DecompositionETS decomposition allows us to see the individual parts!
###Code
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(df['Milk in pounds per cow'], freq=12)
fig = plt.figure()
fig = decomposition.plot()
fig.set_size_inches(15, 8)
###Output
_____no_output_____
###Markdown
Testing for StationarityWe can use the Augmented [Dickey-Fuller](https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test) [unit root test](https://en.wikipedia.org/wiki/Unit_root_test).In statistics and econometrics, an augmented Dickey–Fuller test (ADF) tests the null hypothesis that a unit root is present in a time series sample. The alternative hypothesis is different depending on which version of the test is used, but is usually stationarity or trend-stationarity.Basically, we are trying to decide whether to accept the Null Hypothesis **H0** (that the time series has a unit root, indicating it is non-stationary) or reject **H0** and go with the Alternative Hypothesis (that the time series has no unit root and is stationary).We end up deciding this based on the returned p-value.* A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.* A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.Let's run the Augmented Dickey-Fuller test on our data:
###Code
df.head()
from statsmodels.tsa.stattools import adfuller
result = adfuller(df['Milk in pounds per cow'])
print('Augmented Dickey-Fuller Test:')
labels = ['ADF Test Statistic','p-value','#Lags Used','Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
# Store in a function for later use!
def adf_check(time_series):
"""
Pass in a time series, returns ADF report
"""
result = adfuller(time_series)
print('Augmented Dickey-Fuller Test:')
labels = ['ADF Test Statistic','p-value','#Lags Used','Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
###Output
_____no_output_____
###Markdown
___________ Important Note!** We have now realized that our data is seasonal (it is also pretty obvious from the plot itself). This means we need to use Seasonal ARIMA on our model. If our data were not seasonal, we could use plain ARIMA instead. We will take this into account when differencing our data! Typically financial stock data won't be seasonal, but that is kind of the point of this section, to show you common methods that won't work well on stock finance data!**_____ DifferencingThe first difference of a time series is the series of changes from one period to the next. We can do this easily with pandas. You can continue to take the second difference, third difference, and so on until your data is stationary. ** First Difference **
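In symbols (an added note), the transformations computed in the next few cells are: the first difference $y'_t = y_t - y_{t-1}$, the second difference $y''_t = y'_t - y'_{t-1}$, the seasonal difference $y_t - y_{t-12}$, and the seasonal first difference $y'_t - y'_{t-12}$, using a 12-month period for this monthly data.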
###Code
df['Milk First Difference'] = df['Milk in pounds per cow'] - df['Milk in pounds per cow'].shift(1)
adf_check(df['Milk First Difference'].dropna())
df['Milk First Difference'].plot()
###Output
_____no_output_____
###Markdown
** Second Difference **
###Code
# Sometimes it would be necessary to do a second difference
# This is just for show, we didn't need to do a second difference in our case
df['Milk Second Difference'] = df['Milk First Difference'] - df['Milk First Difference'].shift(1)
adf_check(df['Milk Second Difference'].dropna())
df['Milk Second Difference'].plot()
###Output
_____no_output_____
###Markdown
** Seasonal Difference **
###Code
df['Seasonal Difference'] = df['Milk in pounds per cow'] - df['Milk in pounds per cow'].shift(12)
df['Seasonal Difference'].plot()
# Seasonal Difference by itself was not enough!
adf_check(df['Seasonal Difference'].dropna())
###Output
Augmented Dickey-Fuller Test:
ADF Test Statistic : -2.33541931436
p-value : 0.160798805277
#Lags Used : 12
Number of Observations Used : 143
weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary
###Markdown
** Seasonal First Difference **
###Code
# You can also do seasonal first difference
df['Seasonal First Difference'] = df['Milk First Difference'] - df['Milk First Difference'].shift(12)
df['Seasonal First Difference'].plot()
adf_check(df['Seasonal First Difference'].dropna())
###Output
Augmented Dickey-Fuller Test:
ADF Test Statistic : -5.03800227492
p-value : 1.86542343188e-05
#Lags Used : 11
Number of Observations Used : 143
strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary
###Markdown
Autocorrelation and Partial Autocorrelation PlotsAn autocorrelation plot (also known as a [Correlogram](https://en.wikipedia.org/wiki/Correlogram) ) shows the correlation of the series with itself, lagged by x time units. So the y axis is the correlation and the x axis is the number of time units of lag.So imagine taking your time series of length T, copying it, and deleting the first observation of copy 1 and the last observation of copy 2. Now you have two series of length T−1 for which you calculate a correlation coefficient. This is the value of the vertical axis at x=1 in your plots. It represents the correlation of the series lagged by one time unit. You go on and do this for all possible time lags x and this defines the plot.You will run these plots on your differenced/stationary data. There is a lot of great information for identifying and interpreting ACF and PACF [here](http://people.duke.edu/~rnau/arimrule.htm) and [here](https://people.duke.edu/~rnau/411arim3.htm).
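As a quick numeric check of that description (a minimal sketch added here, not part of the original material), the lag-1 value can be computed by hand as the correlation between the series and a one-step shifted copy of itself; pandas' `autocorr` does the same thing, while statsmodels' `plot_acf` uses a slightly different normalization so its value may differ marginally:
```
series = df['Seasonal First Difference'].dropna()
lag1_manual = series.corr(series.shift(1))  # Pearson correlation with a 1-step lagged copy
lag1_pandas = series.autocorr(lag=1)        # pandas' built-in shortcut for the same quantity
print(lag1_manual, lag1_pandas)
```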
###Code
from statsmodels.graphics.tsaplots import plot_acf,plot_pacf
# Duplicate plots
# Check out: https://stackoverflow.com/questions/21788593/statsmodels-duplicate-charts
# https://github.com/statsmodels/statsmodels/issues/1265
fig_first = plot_acf(df["Milk First Difference"].dropna())
fig_seasonal_first = plot_acf(df["Seasonal First Difference"].dropna())
###Output
_____no_output_____
###Markdown
Pandas also has this functionality built in, but only for ACF, not PACF. So I recommend using statsmodels, as ACF and PACF are more core to its functionality than to pandas'.
###Code
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(df['Seasonal First Difference'].dropna())
###Output
_____no_output_____
###Markdown
Partial AutocorrelationIn general, a partial correlation is a conditional correlation.It is the correlation between two variables under the assumption that we know and take into account the values of some other set of variables.For instance, consider a regression context in which y = response variable and x1, x2, and x3 are predictor variables. The partial correlation between y and x3 is the correlation between the variables determined taking into account how both y and x3 are related to x1 and x2.Formally, this relationship is defined as: $\frac{\text{Covariance}(y, x_3|x_1, x_2)}{\sqrt{\text{Variance}(y|x_1, x_2)\text{Variance}(x_3| x_1, x_2)}}$Check out this [link](http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4463.htm) for full details on this. We can then plot this relationship:
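Before the plot, here is a minimal sketch of that definition (the `partial_corr` helper below is an illustration added here, not a statsmodels function): regress both y and x3 on the controls and correlate the residuals. The PACF plot itself follows.
```
import numpy as np
import statsmodels.api as sm

def partial_corr(y, x3, controls):
    # correlation between y and x3 after removing the linear effect of the controls
    X = sm.add_constant(controls)
    resid_y = sm.OLS(y, X).fit().resid
    resid_x3 = sm.OLS(x3, X).fit().resid
    return np.corrcoef(resid_y, resid_x3)[0, 1]
```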
###Code
result = plot_pacf(df["Seasonal First Difference"].dropna())
###Output
_____no_output_____
###Markdown
InterpretationTypically a sharp drop after lag "k" suggests an AR-k model should be used. If there is a gradual decline, it suggests an MA model. Final Thoughts on Autocorrelation and Partial Autocorrelation* Identification of an AR model is often best done with the PACF. * For an AR model, the theoretical PACF “shuts off” past the order of the model. The phrase “shuts off” means that in theory the partial autocorrelations are equal to 0 beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the AR model. By the “order of the model” we mean the most extreme lag of x that is used as a predictor. * Identification of an MA model is often best done with the ACF rather than the PACF. * For an MA model, the theoretical PACF does not shut off, but instead tapers toward 0 in some manner. A clearer pattern for an MA model is in the ACF. The ACF will have non-zero autocorrelations only at lags involved in the model. _____ Final ACF and PACF PlotsWe've run quite a few plots, so let's just quickly get our "final" ACF and PACF plots. These are the ones we will be referencing in the rest of the notebook below._____
###Code
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(df['Seasonal First Difference'].iloc[13:], lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(df['Seasonal First Difference'].iloc[13:], lags=40, ax=ax2)
###Output
_____no_output_____
###Markdown
Using the Seasonal ARIMA modelFinally we can use our ARIMA model now that we have an understanding of our data!
###Code
# For non-seasonal data
from statsmodels.tsa.arima_model import ARIMA
# I recommend you glance over this!
#
help(ARIMA)
###Output
Help on class ARIMA in module statsmodels.tsa.arima_model:
class ARIMA(ARMA)
| Autoregressive Integrated Moving Average ARIMA(p,d,q) Model
|
| Parameters
| ----------
| endog : array-like
| The endogenous variable.
| order : iterable
| The (p,d,q) order of the model for the number of AR parameters,
| differences, and MA parameters to use.
| exog : array-like, optional
| An optional array of exogenous variables. This should *not* include a
| constant or trend. You can specify this in the `fit` method.
| dates : array-like of datetime, optional
| An array-like object of datetime objects. If a pandas object is given
| for endog or exog, it is assumed to have a DateIndex.
| freq : str, optional
| The frequency of the time-series. A Pandas offset or 'B', 'D', 'W',
| 'M', 'A', or 'Q'. This is optional if dates are given.
|
|
| Notes
| -----
| If exogenous variables are given, then the model that is fit is
|
| .. math::
|
| \phi(L)(y_t - X_t\beta) = \theta(L)\epsilon_t
|
| where :math:`\phi` and :math:`\theta` are polynomials in the lag
| operator, :math:`L`. This is the regression model with ARMA errors,
| or ARMAX model. This specification is used, whether or not the model
| is fit using conditional sum of square or maximum-likelihood, using
| the `method` argument in
| :meth:`statsmodels.tsa.arima_model.ARIMA.fit`. Therefore, for
| now, `css` and `mle` refer to estimation methods only. This may
| change for the case of the `css` model in future versions.
|
| Method resolution order:
| ARIMA
| ARMA
| statsmodels.tsa.base.tsa_model.TimeSeriesModel
| statsmodels.base.model.LikelihoodModel
| statsmodels.base.model.Model
| builtins.object
|
| Methods defined here:
|
| __getnewargs__(self)
|
| __init__(self, endog, order, exog=None, dates=None, freq=None, missing='none')
| Initialize self. See help(type(self)) for accurate signature.
|
| fit(self, start_params=None, trend='c', method='css-mle', transparams=True, solver='lbfgs', maxiter=50, full_output=1, disp=5, callback=None, start_ar_lags=None, **kwargs)
| Fits ARIMA(p,d,q) model by exact maximum likelihood via Kalman filter.
|
| Parameters
| ----------
| start_params : array-like, optional
| Starting parameters for ARMA(p,q). If None, the default is given
| by ARMA._fit_start_params. See there for more information.
| transparams : bool, optional
| Whehter or not to transform the parameters to ensure stationarity.
| Uses the transformation suggested in Jones (1980). If False,
| no checking for stationarity or invertibility is done.
| method : str {'css-mle','mle','css'}
| This is the loglikelihood to maximize. If "css-mle", the
| conditional sum of squares likelihood is maximized and its values
| are used as starting values for the computation of the exact
| likelihood via the Kalman filter. If "mle", the exact likelihood
| is maximized via the Kalman Filter. If "css" the conditional sum
| of squares likelihood is maximized. All three methods use
| `start_params` as starting parameters. See above for more
| information.
| trend : str {'c','nc'}
| Whether to include a constant or not. 'c' includes constant,
| 'nc' no constant.
| solver : str or None, optional
| Solver to be used. The default is 'lbfgs' (limited memory
| Broyden-Fletcher-Goldfarb-Shanno). Other choices are 'bfgs',
| 'newton' (Newton-Raphson), 'nm' (Nelder-Mead), 'cg' -
| (conjugate gradient), 'ncg' (non-conjugate gradient), and
| 'powell'. By default, the limited memory BFGS uses m=12 to
| approximate the Hessian, projected gradient tolerance of 1e-8 and
| factr = 1e2. You can change these by using kwargs.
| maxiter : int, optional
| The maximum number of function evaluations. Default is 50.
| tol : float
| The convergence tolerance. Default is 1e-08.
| full_output : bool, optional
| If True, all output from solver will be available in
| the Results object's mle_retvals attribute. Output is dependent
| on the solver. See Notes for more information.
| disp : int, optional
| If True, convergence information is printed. For the default
| l_bfgs_b solver, disp controls the frequency of the output during
| the iterations. disp < 0 means no output in this case.
| callback : function, optional
| Called after each iteration as callback(xk) where xk is the current
| parameter vector.
| start_ar_lags : int, optional
| Parameter for fitting start_params. When fitting start_params,
| residuals are obtained from an AR fit, then an ARMA(p,q) model is
| fit via OLS using these residuals. If start_ar_lags is None, fit
| an AR process according to best BIC. If start_ar_lags is not None,
| fits an AR process with a lag length equal to start_ar_lags.
| See ARMA._fit_start_params_hr for more information.
| kwargs
| See Notes for keyword arguments that can be passed to fit.
|
| Returns
| -------
| `statsmodels.tsa.arima.ARIMAResults` class
|
| See also
| --------
| statsmodels.base.model.LikelihoodModel.fit : for more information
| on using the solvers.
| ARIMAResults : results class returned by fit
|
| Notes
| ------
| If fit by 'mle', it is assumed for the Kalman Filter that the initial
| unkown state is zero, and that the inital variance is
| P = dot(inv(identity(m**2)-kron(T,T)),dot(R,R.T).ravel('F')).reshape(r,
| r, order = 'F')
|
| predict(self, params, start=None, end=None, exog=None, typ='linear', dynamic=False)
| ARIMA model in-sample and out-of-sample prediction
|
| Parameters
| ----------
| params : array-like
| The fitted parameters of the model.
| start : int, str, or datetime
| Zero-indexed observation number at which to start forecasting, ie.,
| the first forecast is start. Can also be a date string to
| parse or a datetime type.
| end : int, str, or datetime
| Zero-indexed observation number at which to end forecasting, ie.,
| the first forecast is start. Can also be a date string to
| parse or a datetime type. However, if the dates index does not
| have a fixed frequency, end must be an integer index if you
| want out of sample prediction.
| exog : array-like, optional
| If the model is an ARMAX and out-of-sample forecasting is
| requested, exog must be given. Note that you'll need to pass
| `k_ar` additional lags for any exogenous variables. E.g., if you
| fit an ARMAX(2, q) model and want to predict 5 steps, you need 7
| observations to do this.
| dynamic : bool, optional
| The `dynamic` keyword affects in-sample prediction. If dynamic
| is False, then the in-sample lagged values are used for
| prediction. If `dynamic` is True, then in-sample forecasts are
| used in place of lagged dependent variables. The first forecasted
| value is `start`.
| typ : str {'linear', 'levels'}
|
| - 'linear' : Linear prediction in terms of the differenced
| endogenous variables.
| - 'levels' : Predict the levels of the original endogenous
| variables.
|
|
| Returns
| -------
| predict : array
| The predicted values.
|
|
|
| Notes
| -----
| Use the results predict method instead.
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(cls, endog, order, exog=None, dates=None, freq=None, missing='none')
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from ARMA:
|
| geterrors(self, params)
| Get the errors of the ARMA process.
|
| Parameters
| ----------
| params : array-like
| The fitted ARMA parameters
| order : array-like
| 3 item iterable, with the number of AR, MA, and exogenous
| parameters, including the trend
|
| hessian(self, params)
| Compute the Hessian at params,
|
| Notes
| -----
| This is a numerical approximation.
|
| loglike(self, params, set_sigma2=True)
| Compute the log-likelihood for ARMA(p,q) model
|
| Notes
| -----
| Likelihood used depends on the method set in fit
|
| loglike_css(self, params, set_sigma2=True)
| Conditional Sum of Squares likelihood function.
|
| loglike_kalman(self, params, set_sigma2=True)
| Compute exact loglikelihood for ARMA(p,q) model by the Kalman Filter.
|
| score(self, params)
| Compute the score function at params.
|
| Notes
| -----
| This is a numerical approximation.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from statsmodels.tsa.base.tsa_model.TimeSeriesModel:
|
| exog_names
|
| ----------------------------------------------------------------------
| Methods inherited from statsmodels.base.model.LikelihoodModel:
|
| information(self, params)
| Fisher information matrix of model
|
| Returns -Hessian of loglike evaluated at params.
|
| initialize(self)
| Initialize (possibly re-initialize) a Model instance. For
| instance, the design matrix of a linear model may change
| and some things must be recomputed.
|
| ----------------------------------------------------------------------
| Class methods inherited from statsmodels.base.model.Model:
|
| from_formula(formula, data, subset=None, drop_cols=None, *args, **kwargs) from builtins.type
| Create a Model from a formula and dataframe.
|
| Parameters
| ----------
| formula : str or generic Formula object
| The formula specifying the model
| data : array-like
| The data for the model. See Notes.
| subset : array-like
| An array-like object of booleans, integers, or index values that
| indicate the subset of df to use in the model. Assumes df is a
| `pandas.DataFrame`
| drop_cols : array-like
| Columns to drop from the design matrix. Cannot be used to
| drop terms involving categoricals.
| args : extra arguments
| These are passed to the model
| kwargs : extra keyword arguments
| These are passed to the model with one exception. The
| ``eval_env`` keyword is passed to patsy. It can be either a
| :class:`patsy:patsy.EvalEnvironment` object or an integer
| indicating the depth of the namespace to use. For example, the
| default ``eval_env=0`` uses the calling namespace. If you wish
| to use a "clean" environment set ``eval_env=-1``.
|
| Returns
| -------
| model : Model instance
|
| Notes
| ------
| data must define __getitem__ with the keys in the formula terms
| args and kwargs are passed on to the model instantiation. E.g.,
| a numpy structured or rec array, a dictionary, or a pandas DataFrame.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from statsmodels.base.model.Model:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| endog_names
| Names of endogenous variables
###Markdown
p,d,q parameters* p: The number of lag observations included in the model.* d: The number of times that the raw observations are differenced, also called the degree of differencing.* q: The size of the moving average window, also called the order of moving average.
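For a non-seasonal series these three numbers are passed directly as the `order` tuple; a sketch for illustration only, since our milk data is seasonal and is modeled with SARIMAX below:
```
# plain (non-seasonal) ARIMA: one AR lag, one order of differencing, one MA term
model_ns = ARIMA(df['Milk in pounds per cow'], order=(1, 1, 1))
results_ns = model_ns.fit()
print(results_ns.summary())
```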
###Code
# We have seasonal data!
model = sm.tsa.statespace.SARIMAX(df['Milk in pounds per cow'],order=(0,1,0), seasonal_order=(1,1,1,12))
results = model.fit()
print(results.summary())
results.resid.plot()
results.resid.plot(kind='kde')
###Output
_____no_output_____
###Markdown
Prediction of Future ValuesFirst we can get an idea of how well our model performs by just predicting for values that we actually already know:
###Code
df['forecast'] = results.predict(start = 150, end= 168, dynamic= True)
df[['Milk in pounds per cow','forecast']].plot(figsize=(12,8))
###Output
_____no_output_____
###Markdown
ForecastingThis requires more time periods, so let's create them with pandas onto our original dataframe!
###Code
df.tail()
# https://pandas.pydata.org/pandas-docs/stable/timeseries.html
# Alternatives
# pd.date_range(df.index[-1],periods=12,freq='M')
from pandas.tseries.offsets import DateOffset
future_dates = [df.index[-1] + DateOffset(months=x) for x in range(0,24) ]
future_dates
future_dates_df = pd.DataFrame(index=future_dates[1:],columns=df.columns)
future_df = pd.concat([df,future_dates_df])
future_df.head()
future_df.tail()
future_df['forecast'] = results.predict(start = 168, end = 188, dynamic= True)
future_df[['Milk in pounds per cow', 'forecast']].plot(figsize=(12, 8))
###Output
_____no_output_____
###Markdown
___ ___*Copyright Pierian Data 2017**For more information, visit us at www.pieriandata.com* Warning! This is a complicated topic! Remember that this is an optional notebook to go through and that to fully understand it you should read the supplemental links and watch the full explanatory walkthrough video. This notebook and the video lectures are not meant to be a full comprehensive overview of ARIMA, but instead a walkthrough of what you can use it for, so you can later understand why it may or may not be a good choice for Financial Stock Data.____ ARIMA and Seasonal ARIMA Autoregressive Integrated Moving AveragesThe general process for ARIMA models is the following:* Visualize the Time Series Data* Make the time series data stationary* Plot the Correlation and AutoCorrelation Charts* Construct the ARIMA Model* Use the model to make predictionsLet's go through these steps! Step 1: Get the Data (and format it)We will be using some data about monthly milk production, full details on it can be found [here](https://datamarket.com/data/set/22ox/monthly-milk-production-pounds-per-cow-jan-62-dec-75!ds=22ox&display=line).It's saved as a csv for you already, let's load it up:
###Code
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('monthly-milk-production-pounds-p.csv')
df.head()
df.tail()
###Output
_____no_output_____
###Markdown
** Clean Up**Let's clean this up just a little!
###Code
df.columns = ['Month', 'Milk in pounds per cow']
df.head()
# Weird last value at bottom causing issues
df.drop(168,
axis = 0,
inplace = True)
df['Month'] = pd.to_datetime(df['Month'])
df.head()
df.set_index('Month',inplace=True)
df.head()
df.describe().transpose()
###Output
_____no_output_____
###Markdown
Step 2: Visualize the DataLet's visualize this data with a few methods.
###Code
df.plot()
timeseries = df['Milk in pounds per cow']
timeseries.rolling(12).mean().plot(label='12 Month Rolling Mean')
timeseries.rolling(12).std().plot(label='12 Month Rolling Std')
timeseries.plot()
plt.legend()
timeseries.rolling(12).mean().plot(label = '12 Month Rolling Mean')
timeseries.plot()
plt.legend()
###Output
_____no_output_____
###Markdown
DecompositionETS decomposition allows us to see the individual parts!
###Code
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(df['Milk in pounds per cow'],
freq = 12)
fig = plt.figure()
fig = decomposition.plot()
fig.set_size_inches(15, 8)
###Output
_____no_output_____
###Markdown
Testing for StationarityWe can use the Augmented [Dickey-Fuller](https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test) [unit root test](https://en.wikipedia.org/wiki/Unit_root_test).In statistics and econometrics, an augmented Dickey–Fuller test (ADF) tests the null hypothesis that a unit root is present in a time series sample. The alternative hypothesis is different depending on which version of the test is used, but is usually stationarity or trend-stationarity.Basically, we are trying to decide whether to accept the Null Hypothesis **H0** (that the time series has a unit root, indicating it is non-stationary) or reject **H0** and go with the Alternative Hypothesis (that the time series has no unit root and is stationary).We end up deciding this based on the p-value returned.* A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.* A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.Let's run the Augmented Dickey-Fuller test on our data:
###Code
df.head()
from statsmodels.tsa.stattools import adfuller
result = adfuller(df['Milk in pounds per cow'])
print('Augmented Dickey-Fuller Test:')
labels = ['ADF Test Statistic',
'p-value',
'#Lags Used',
'Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
# Store in a function for later use!
def adf_check(time_series):
"""
Pass in a time series, returns ADF report
"""
result = adfuller(time_series)
print('Augmented Dickey-Fuller Test:')
labels = ['ADF Test Statistic',
'p-value',
'#Lags Used',
'Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
###Output
_____no_output_____
###Markdown
___________ Important Note!** We have now realized that our data is seasonal (it is also pretty obvious from the plot itself). This means we need to use Seasonal ARIMA on our model. If our data was not seasonal, it means we could use just ARIMA on it. We will take this into account when differencing our data! Typically financial stock data won't be seasonal, but that is kind of the point of this section, to show you common methods, that won't work well on stock finance data!**_____ DifferencingThe first difference of a time series is the series of changes from one period to the next. We can do this easily with pandas. You can continue to take the second difference, third difference, and so on until your data is stationary. ** First Difference **
###Code
df['Milk First Difference'] = df['Milk in pounds per cow'] - df['Milk in pounds per cow'].shift(1)
adf_check(df['Milk First Difference'].dropna())
df['Milk First Difference'].plot()
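# Illustrative check: pandas' built-in .diff() is shorthand for the explicit shift-and-subtract
# used above, so the two constructions agree exactly.
assert df['Milk First Difference'].equals(df['Milk in pounds per cow'].diff(1))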
###Output
_____no_output_____
###Markdown
** Second Difference **
###Code
# Sometimes it would be necessary to do a second difference
# This is just for show, we didn't need to do a second difference in our case
df['Milk Second Difference'] = df['Milk First Difference'] - df['Milk First Difference'].shift(1)
adf_check(df['Milk Second Difference'].dropna())
df['Milk Second Difference'].plot()
###Output
_____no_output_____
###Markdown
** Seasonal Difference **
###Code
df['Seasonal Difference'] = df['Milk in pounds per cow'] - df['Milk in pounds per cow'].shift(12)
df['Seasonal Difference'].plot()
# Seasonal Difference by itself was not enough!
adf_check(df['Seasonal Difference'].dropna())
###Output
Augmented Dickey-Fuller Test:
ADF Test Statistic : -2.33541931436
p-value : 0.160798805277
#Lags Used : 12
Number of Observations Used : 143
weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary
###Markdown
** Seasonal First Difference **
###Code
# You can also do seasonal first difference
df['Seasonal First Difference'] = df['Milk First Difference'] - df['Milk First Difference'].shift(12)
df['Seasonal First Difference'].plot()
adf_check(df['Seasonal First Difference'].dropna())
###Output
Augmented Dickey-Fuller Test:
ADF Test Statistic : -5.03800227492
p-value : 1.86542343188e-05
#Lags Used : 11
Number of Observations Used : 143
strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary
###Markdown
Autocorrelation and Partial Autocorrelation PlotsAn autocorrelation plot (also known as a [Correlogram](https://en.wikipedia.org/wiki/Correlogram) ) shows the correlation of the series with itself, lagged by x time units. So the y axis is the correlation and the x axis is the number of time units of lag.So imagine taking your time series of length T, copying it, and deleting the first observation of copy 1 and the last observation of copy 2. Now you have two series of length T−1 for which you calculate a correlation coefficient. This is the value of the vertical axis at x=1 in your plots. It represents the correlation of the series lagged by one time unit. You go on and do this for all possible time lags x and this defines the plot.You will run these plots on your differenced/stationary data. There is a lot of great information for identifying and interpreting ACF and PACF [here](http://people.duke.edu/~rnau/arimrule.htm) and [here](https://people.duke.edu/~rnau/411arim3.htm). Autocorrelation InterpretationThe actual interpretation and how it relates to ARIMA models can get a bit complicated, but there are some basic common methods we can use for the ARIMA model. Our main priority here is to try to figure out whether we will use the AR or MA components for the ARIMA model (or both!) as well as how many lags we should use. In general you would use either AR or MA; using both is less common.* If the autocorrelation plot shows positive autocorrelation at the first lag (lag-1), then it suggests using AR terms in relation to the lag* If the autocorrelation plot shows negative autocorrelation at the first lag, then it suggests using MA terms. _____ Important Note! Here we will show the ACF and PACF run on multiple differenced data sets that have been made stationary in different ways; typically you would just choose a single stationary data set and continue all the way through with that.The reason we use two here is to show you the two typical types of behaviour you would see when using ACF._____
###Code
from statsmodels.graphics.tsaplots import plot_acf,plot_pacf
# Duplicate plots
# Check out: https://stackoverflow.com/questions/21788593/statsmodels-duplicate-charts
# https://github.com/statsmodels/statsmodels/issues/1265
fig_first = plot_acf(df["Milk First Difference"].dropna())
fig_seasonal_first = plot_acf(df["Seasonal First Difference"].dropna())
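# Illustrative numeric check of the lag-1 point on these plots: Series.autocorr(lag=1)
# correlates the series with a copy of itself shifted by one period, which is exactly the
# copy-and-shift construction described above (variable name is just for illustration).
lag1_check = df['Seasonal First Difference'].dropna().autocorr(lag=1)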
###Output
_____no_output_____
###Markdown
Pandas also has this functionality built in, but only for ACF, not PACF. So I recommend using statsmodels, as ACF and PACF are more core to its functionality than to pandas'.
###Code
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(df['Seasonal First Difference'].dropna())
###Output
_____no_output_____
###Markdown
Partial AutocorrelationIn general, a partial correlation is a conditional correlation.It is the correlation between two variables under the assumption that we know and take into account the values of some other set of variables.For instance, consider a regression context in which y = response variable and x1, x2, and x3 are predictor variables. The partial correlation between y and x3 is the correlation between the variables determined taking into account how both y and x3 are related to x1 and x2.Formally, this relationship is defined as: $\frac{\text{Covariance}(y, x_3|x_1, x_2)}{\sqrt{\text{Variance}(y|x_1, x_2)\text{Variance}(x_3| x_1, x_2)}}$Check out this [link](http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4463.htm) for full details on this. We can then plot this relationship:
###Code
result = plot_pacf(df["Seasonal First Difference"].dropna())
###Output
_____no_output_____
###Markdown
InterpretationTypically a sharp drop after lag "k" suggests an AR-k model should be used. If there is a gradual decline, it suggests an MA model. Final Thoughts on Autocorrelation and Partial Autocorrelation* Identification of an AR model is often best done with the PACF. * For an AR model, the theoretical PACF “shuts off” past the order of the model. The phrase “shuts off” means that in theory the partial autocorrelations are equal to 0 beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the AR model. By the “order of the model” we mean the most extreme lag of x that is used as a predictor. * Identification of an MA model is often best done with the ACF rather than the PACF. * For an MA model, the theoretical PACF does not shut off, but instead tapers toward 0 in some manner. A clearer pattern for an MA model is in the ACF. The ACF will have non-zero autocorrelations only at lags involved in the model. _____ Final ACF and PACF PlotsWe've run quite a few plots, so let's just quickly get our "final" ACF and PACF plots. These are the ones we will be referencing in the rest of the notebook below._____
###Code
fig = plt.figure(figsize = (12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(df['Seasonal First Difference'].iloc[13:],
lags = 40,
ax = ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(df['Seasonal First Difference'].iloc[13:],
lags = 40,
ax = ax2)
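# The same statistics are available numerically (an illustrative addition; acf_vals/pacf_vals
# are just example names) if you prefer to read candidate AR/MA orders off the raw values
# rather than the plots:
from statsmodels.tsa.stattools import acf, pacf
acf_vals = acf(df['Seasonal First Difference'].iloc[13:], nlags=40)
pacf_vals = pacf(df['Seasonal First Difference'].iloc[13:], nlags=40)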
###Output
_____no_output_____
###Markdown
Using the Seasonal ARIMA modelFinally we can use our ARIMA model now that we have an understanding of our data!
###Code
# For non-seasonal data
from statsmodels.tsa.arima_model import ARIMA
# I recommend you glance over this!
#
help(ARIMA)
###Output
Help on class ARIMA in module statsmodels.tsa.arima_model:
class ARIMA(ARMA)
| Autoregressive Integrated Moving Average ARIMA(p,d,q) Model
|
| Parameters
| ----------
| endog : array-like
| The endogenous variable.
| order : iterable
| The (p,d,q) order of the model for the number of AR parameters,
| differences, and MA parameters to use.
| exog : array-like, optional
| An optional array of exogenous variables. This should *not* include a
| constant or trend. You can specify this in the `fit` method.
| dates : array-like of datetime, optional
| An array-like object of datetime objects. If a pandas object is given
| for endog or exog, it is assumed to have a DateIndex.
| freq : str, optional
| The frequency of the time-series. A Pandas offset or 'B', 'D', 'W',
| 'M', 'A', or 'Q'. This is optional if dates are given.
|
|
| Notes
| -----
| If exogenous variables are given, then the model that is fit is
|
| .. math::
|
| \phi(L)(y_t - X_t\beta) = \theta(L)\epsilon_t
|
| where :math:`\phi` and :math:`\theta` are polynomials in the lag
| operator, :math:`L`. This is the regression model with ARMA errors,
| or ARMAX model. This specification is used, whether or not the model
| is fit using conditional sum of square or maximum-likelihood, using
| the `method` argument in
| :meth:`statsmodels.tsa.arima_model.ARIMA.fit`. Therefore, for
| now, `css` and `mle` refer to estimation methods only. This may
| change for the case of the `css` model in future versions.
|
| Method resolution order:
| ARIMA
| ARMA
| statsmodels.tsa.base.tsa_model.TimeSeriesModel
| statsmodels.base.model.LikelihoodModel
| statsmodels.base.model.Model
| builtins.object
|
| Methods defined here:
|
| __getnewargs__(self)
|
| __init__(self, endog, order, exog=None, dates=None, freq=None, missing='none')
| Initialize self. See help(type(self)) for accurate signature.
|
| fit(self, start_params=None, trend='c', method='css-mle', transparams=True, solver='lbfgs', maxiter=50, full_output=1, disp=5, callback=None, start_ar_lags=None, **kwargs)
| Fits ARIMA(p,d,q) model by exact maximum likelihood via Kalman filter.
|
| Parameters
| ----------
| start_params : array-like, optional
| Starting parameters for ARMA(p,q). If None, the default is given
| by ARMA._fit_start_params. See there for more information.
| transparams : bool, optional
| Whehter or not to transform the parameters to ensure stationarity.
| Uses the transformation suggested in Jones (1980). If False,
| no checking for stationarity or invertibility is done.
| method : str {'css-mle','mle','css'}
| This is the loglikelihood to maximize. If "css-mle", the
| conditional sum of squares likelihood is maximized and its values
| are used as starting values for the computation of the exact
| likelihood via the Kalman filter. If "mle", the exact likelihood
| is maximized via the Kalman Filter. If "css" the conditional sum
| of squares likelihood is maximized. All three methods use
| `start_params` as starting parameters. See above for more
| information.
| trend : str {'c','nc'}
| Whether to include a constant or not. 'c' includes constant,
| 'nc' no constant.
| solver : str or None, optional
| Solver to be used. The default is 'lbfgs' (limited memory
| Broyden-Fletcher-Goldfarb-Shanno). Other choices are 'bfgs',
| 'newton' (Newton-Raphson), 'nm' (Nelder-Mead), 'cg' -
| (conjugate gradient), 'ncg' (non-conjugate gradient), and
| 'powell'. By default, the limited memory BFGS uses m=12 to
| approximate the Hessian, projected gradient tolerance of 1e-8 and
| factr = 1e2. You can change these by using kwargs.
| maxiter : int, optional
| The maximum number of function evaluations. Default is 50.
| tol : float
| The convergence tolerance. Default is 1e-08.
| full_output : bool, optional
| If True, all output from solver will be available in
| the Results object's mle_retvals attribute. Output is dependent
| on the solver. See Notes for more information.
| disp : int, optional
| If True, convergence information is printed. For the default
| l_bfgs_b solver, disp controls the frequency of the output during
| the iterations. disp < 0 means no output in this case.
| callback : function, optional
| Called after each iteration as callback(xk) where xk is the current
| parameter vector.
| start_ar_lags : int, optional
| Parameter for fitting start_params. When fitting start_params,
| residuals are obtained from an AR fit, then an ARMA(p,q) model is
| fit via OLS using these residuals. If start_ar_lags is None, fit
| an AR process according to best BIC. If start_ar_lags is not None,
| fits an AR process with a lag length equal to start_ar_lags.
| See ARMA._fit_start_params_hr for more information.
| kwargs
| See Notes for keyword arguments that can be passed to fit.
|
| Returns
| -------
| `statsmodels.tsa.arima.ARIMAResults` class
|
| See also
| --------
| statsmodels.base.model.LikelihoodModel.fit : for more information
| on using the solvers.
| ARIMAResults : results class returned by fit
|
| Notes
| ------
| If fit by 'mle', it is assumed for the Kalman Filter that the initial
| unkown state is zero, and that the inital variance is
| P = dot(inv(identity(m**2)-kron(T,T)),dot(R,R.T).ravel('F')).reshape(r,
| r, order = 'F')
|
| predict(self, params, start=None, end=None, exog=None, typ='linear', dynamic=False)
| ARIMA model in-sample and out-of-sample prediction
|
| Parameters
| ----------
| params : array-like
| The fitted parameters of the model.
| start : int, str, or datetime
| Zero-indexed observation number at which to start forecasting, ie.,
| the first forecast is start. Can also be a date string to
| parse or a datetime type.
| end : int, str, or datetime
| Zero-indexed observation number at which to end forecasting, ie.,
| the first forecast is start. Can also be a date string to
| parse or a datetime type. However, if the dates index does not
| have a fixed frequency, end must be an integer index if you
| want out of sample prediction.
| exog : array-like, optional
| If the model is an ARMAX and out-of-sample forecasting is
| requested, exog must be given. Note that you'll need to pass
| `k_ar` additional lags for any exogenous variables. E.g., if you
| fit an ARMAX(2, q) model and want to predict 5 steps, you need 7
| observations to do this.
| dynamic : bool, optional
| The `dynamic` keyword affects in-sample prediction. If dynamic
| is False, then the in-sample lagged values are used for
| prediction. If `dynamic` is True, then in-sample forecasts are
| used in place of lagged dependent variables. The first forecasted
| value is `start`.
| typ : str {'linear', 'levels'}
|
| - 'linear' : Linear prediction in terms of the differenced
| endogenous variables.
| - 'levels' : Predict the levels of the original endogenous
| variables.
|
|
| Returns
| -------
| predict : array
| The predicted values.
|
|
|
| Notes
| -----
| Use the results predict method instead.
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(cls, endog, order, exog=None, dates=None, freq=None, missing='none')
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from ARMA:
|
| geterrors(self, params)
| Get the errors of the ARMA process.
|
| Parameters
| ----------
| params : array-like
| The fitted ARMA parameters
| order : array-like
| 3 item iterable, with the number of AR, MA, and exogenous
| parameters, including the trend
|
| hessian(self, params)
| Compute the Hessian at params,
|
| Notes
| -----
| This is a numerical approximation.
|
| loglike(self, params, set_sigma2=True)
| Compute the log-likelihood for ARMA(p,q) model
|
| Notes
| -----
| Likelihood used depends on the method set in fit
|
| loglike_css(self, params, set_sigma2=True)
| Conditional Sum of Squares likelihood function.
|
| loglike_kalman(self, params, set_sigma2=True)
| Compute exact loglikelihood for ARMA(p,q) model by the Kalman Filter.
|
| score(self, params)
| Compute the score function at params.
|
| Notes
| -----
| This is a numerical approximation.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from statsmodels.tsa.base.tsa_model.TimeSeriesModel:
|
| exog_names
|
| ----------------------------------------------------------------------
| Methods inherited from statsmodels.base.model.LikelihoodModel:
|
| information(self, params)
| Fisher information matrix of model
|
| Returns -Hessian of loglike evaluated at params.
|
| initialize(self)
| Initialize (possibly re-initialize) a Model instance. For
| instance, the design matrix of a linear model may change
| and some things must be recomputed.
|
| ----------------------------------------------------------------------
| Class methods inherited from statsmodels.base.model.Model:
|
| from_formula(formula, data, subset=None, drop_cols=None, *args, **kwargs) from builtins.type
| Create a Model from a formula and dataframe.
|
| Parameters
| ----------
| formula : str or generic Formula object
| The formula specifying the model
| data : array-like
| The data for the model. See Notes.
| subset : array-like
| An array-like object of booleans, integers, or index values that
| indicate the subset of df to use in the model. Assumes df is a
| `pandas.DataFrame`
| drop_cols : array-like
| Columns to drop from the design matrix. Cannot be used to
| drop terms involving categoricals.
| args : extra arguments
| These are passed to the model
| kwargs : extra keyword arguments
| These are passed to the model with one exception. The
| ``eval_env`` keyword is passed to patsy. It can be either a
| :class:`patsy:patsy.EvalEnvironment` object or an integer
| indicating the depth of the namespace to use. For example, the
| default ``eval_env=0`` uses the calling namespace. If you wish
| to use a "clean" environment set ``eval_env=-1``.
|
| Returns
| -------
| model : Model instance
|
| Notes
| ------
| data must define __getitem__ with the keys in the formula terms
| args and kwargs are passed on to the model instantiation. E.g.,
| a numpy structured or rec array, a dictionary, or a pandas DataFrame.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from statsmodels.base.model.Model:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| endog_names
| Names of endogenous variables
###Markdown
p,d,q parameters* p: The number of lag observations included in the model.* d: The number of times that the raw observations are differenced, also called the degree of differencing.* q: The size of the moving average window, also called the order of moving average.
###Code
# We have seasonal data!
model = sm.tsa.statespace.SARIMAX(df['Milk in pounds per cow'],
order = (0,1,0),
seasonal_order = (1,1,1,12))
results = model.fit()
print(results.summary())
results.resid.plot()
results.resid.plot(kind = 'kde')
###Output
_____no_output_____
###Markdown
Prediction of Future ValuesFirst we can get an idea of how well our model performs by just predicting values that we actually already know:
###Code
df['forecast'] = results.predict(start = 150,
end = 168,
dynamic = True)
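# dynamic=True means that from `start` onward the model's own forecasts are fed back in place
# of the observed lagged values (see the `predict` docstring above), which gives a more
# realistic picture of multi-step forecast quality than one-step-ahead prediction.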
df[['Milk in pounds per cow','forecast']].plot(figsize = (12, 8))
###Output
_____no_output_____
###Markdown
ForecastingThis requires more time periods, so let's create them with pandas onto our original dataframe!
###Code
df.tail()
# https://pandas.pydata.org/pandas-docs/stable/timeseries.html
# Alternatives
# pd.date_range(df.index[-1],periods=12,freq='M')
from pandas.tseries.offsets import DateOffset
future_dates = [df.index[-1] + DateOffset(months = x) for x in range(0,24) ]
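# df.index[-1] is the last observed month; adding DateOffset(months=x) for x = 0..23 builds a
# monthly index extending two years past the data (x=0 repeats the last observed month and is
# dropped via future_dates[1:] below).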
future_dates
future_dates_df = pd.DataFrame(index = future_dates[1:],
columns = df.columns)
future_df = pd.concat([df,future_dates_df])
future_df.head()
future_df.tail()
future_df['forecast'] = results.predict(start = 168,
end = 188,
dynamic= True)
future_df[['Milk in pounds per cow', 'forecast']].plot(figsize = (12, 8))
###Output
_____no_output_____
###Markdown
___ ___*Copyright Pierian Data 2017**For more information, visit us at www.pieriandata.com* Warning! This is a complicated topic! Remember that this is an optional notebook to go through and that to fully understand it you should read the supplemental links and watch the full explanatory walkthrough video. This notebook and the video lectures are not meant to be a full comprehensive overview of ARIMA, but instead a walkthrough of what you can use it for, so you can later understand why it may or may not be a good choice for Financial Stock Data.____ ARIMA and Seasonal ARIMA Autoregressive Integrated Moving AveragesThe general process for ARIMA models is the following:* Visualize the Time Series Data* Make the time series data stationary* Plot the Correlation and AutoCorrelation Charts* Construct the ARIMA Model* Use the model to make predictionsLet's go through these steps! Step 1: Get the Data (and format it)We will be using some data about monthly milk production, full details on it can be found [here](https://datamarket.com/data/set/22ox/monthly-milk-production-pounds-per-cow-jan-62-dec-75!ds=22ox&display=line).It's saved as a CSV for you already, so let's load it up:
###Code
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('monthly-milk-production-pounds-p.csv')
df.head()
df.tail()
###Output
_____no_output_____
###Markdown
** Clean Up**Let's clean this up just a little!
###Code
df.columns = ['Month','Milk in pounds per cow']
df.head()
# Weird last value at bottom causing issues
df.drop(168,axis=0,inplace=True)
# convert the 'Month' column to datetime objects
df['Month'] = pd.to_datetime(df['Month'])
df.head()
df.set_index('Month',inplace=True)
df.head()
df.describe().transpose()
###Output
_____no_output_____
###Markdown
Step 2: Visualize the DataLet's visualize this data with a few methods.
###Code
df.plot()
timeseries = df['Milk in pounds per cow']
timeseries.rolling(12).mean().plot(label='12 Month Rolling Mean')
timeseries.rolling(12).std().plot(label='12 Month Rolling Std')
timeseries.plot()
plt.legend()
timeseries.rolling(12).mean().plot(label='12 Month Rolling Mean')
timeseries.plot()
plt.legend()
###Output
_____no_output_____
###Markdown
DecompositionETS decomposition allows us to see the individual parts!
###Code
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(df['Milk in pounds per cow'], freq=12)
fig = plt.figure()
fig = decomposition.plot()
fig.set_size_inches(15, 8)
# residual is the part of the series explained by neither the seasonal nor the trend component
###Output
<ipython-input-17-6657136243e8>:2: FutureWarning: the 'freq'' keyword is deprecated, use 'period' instead
decomposition = seasonal_decompose(df['Milk in pounds per cow'], freq=12)
###Markdown
Testing for StationarityWe can use the Augmented [Dickey-Fuller](https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test) [unit root test](https://en.wikipedia.org/wiki/Unit_root_test).**The null hypothesis says the series is a non-stationary time series.** Note: stationary means the series does not grow and remains stable.In statistics and econometrics, an augmented Dickey–Fuller test (ADF) tests the null hypothesis that a unit root is present in a time series sample. The alternative hypothesis is different depending on which version of the test is used, but is usually stationarity or trend-stationarity.Basically, we are trying to decide whether to accept the Null Hypothesis **H0** (that the time series has a unit root, indicating it is non-stationary) or reject **H0** and go with the Alternative Hypothesis (that the time series has no unit root and is stationary).We end up deciding this based on the p-value returned.* A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.* A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.Let's run the Augmented Dickey-Fuller test on our data:
###Code
df.head()
from statsmodels.tsa.stattools import adfuller
result = adfuller(df['Milk in pounds per cow'])
print('Augmented Dickey-Fuller Test:')
labels = ['ADF Test Statistic','p-value','#Lags Used','Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
# Store in a function for later use!
def adf_check(time_series):
"""
Pass in a time series, returns ADF report
"""
result = adfuller(time_series)
print('Augmented Dickey-Fuller Test:')
labels = ['ADF Test Statistic','p-value','#Lags Used','Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
###Output
_____no_output_____
###Markdown
___________ Important Note!** We have now realized that our data is seasonal (it is also pretty obvious from the plot itself). This means we need to use Seasonal ARIMA on our model. If our data was not seasonal, it means we could use just ARIMA on it. We will take this into account when differencing our data! Typically financial stock data won't be seasonal, but that is kind of the point of this section, to show you common methods, that won't work well on stock finance data!**_____ DifferencingThe first difference of a time series is the series of changes from one period to the next. We can do this easily with pandas. You can continue to take the second difference, third difference, and so on until your data is stationary. ** First Difference **
###Code
df['Milk First Difference'] = df['Milk in pounds per cow'] - df['Milk in pounds per cow'].shift(1)
adf_check(df['Milk First Difference'].dropna())
df['Milk First Difference'].plot()
###Output
_____no_output_____
###Markdown
** Second Difference **
###Code
# Sometimes it would be necessary to do a second difference
# This is just for show, we didn't need to do a second difference in our case
df['Milk Second Difference'] = df['Milk First Difference'] - df['Milk First Difference'].shift(1)
adf_check(df['Milk Second Difference'].dropna())
df['Milk Second Difference'].plot()
###Output
_____no_output_____
###Markdown
** Seasonal Difference **
###Code
df['Seasonal Difference'] = df['Milk in pounds per cow'] - df['Milk in pounds per cow'].shift(12)
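# Equivalently (illustrative): df['Milk in pounds per cow'].diff(12) -- a 12-period difference
# aimed at removing the annual seasonal pattern.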
df['Seasonal Difference'].plot()
# Seasonal Difference by itself was not enough!
adf_check(df['Seasonal Difference'].dropna())
###Output
Augmented Dickey-Fuller Test:
ADF Test Statistic : -2.33541931436
p-value : 0.160798805277
#Lags Used : 12
Number of Observations Used : 143
weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary
###Markdown
** Seasonal First Difference **
###Code
# You can also do seasonal first difference
df['Seasonal First Difference'] = df['Milk First Difference'] - df['Milk First Difference'].shift(12)
df['Seasonal First Difference'].plot()
adf_check(df['Seasonal First Difference'].dropna())
###Output
Augmented Dickey-Fuller Test:
ADF Test Statistic : -5.03800227492
p-value : 1.86542343188e-05
#Lags Used : 11
Number of Observations Used : 143
strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary
###Markdown
Autocorrelation and Partial Autocorrelation PlotsAn autocorrelation plot (also known as a [Correlogram](https://en.wikipedia.org/wiki/Correlogram) ) shows the correlation of the series with itself, lagged by x time units. So the y axis is the correlation and the x axis is the number of time units of lag.So imagine taking your time series of length T, copying it, and deleting the first observation of copy 1 and the last observation of copy 2. Now you have two series of length T−1 for which you calculate a correlation coefficient. This is the value of the vertical axis at x=1 in your plots. It represents the correlation of the series lagged by one time unit. You go on and do this for all possible time lags x and this defines the plot.You will run these plots on your differenced/stationary data. There is a lot of great information for identifying and interpreting ACF and PACF [here](http://people.duke.edu/~rnau/arimrule.htm) and [here](https://people.duke.edu/~rnau/411arim3.htm). Autocorrelation InterpretationThe actual interpretation and how it relates to ARIMA models can get a bit complicated, but there are some basic common methods we can use for the ARIMA model. Our main priority here is to try to figure out whether we will use the AR or MA components for the ARIMA model (or both!) as well as how many lags we should use. In general you would use either AR or MA; using both is less common.* If the autocorrelation plot shows positive autocorrelation at the first lag (lag-1), then it suggests using AR terms in relation to the lag* If the autocorrelation plot shows negative autocorrelation at the first lag, then it suggests using MA terms. _____ Important Note! Here we will show the ACF and PACF run on multiple differenced data sets that have been made stationary in different ways; typically you would just choose a single stationary data set and continue all the way through with that.The reason we use two here is to show you the two typical types of behaviour you would see when using ACF._____
###Code
from statsmodels.graphics.tsaplots import plot_acf,plot_pacf
# Duplicate plots
# Check out: https://stackoverflow.com/questions/21788593/statsmodels-duplicate-charts
# https://github.com/statsmodels/statsmodels/issues/1265
fig_first = plot_acf(df["Milk First Difference"].dropna())
fig_seasonal_first = plot_acf(df["Seasonal First Difference"].dropna())
###Output
_____no_output_____
###Markdown
Pandas also has this functionality built in, but only for ACF, not PACF. So I recommend using statsmodels, as ACF and PACF are more core to its functionality than to pandas'.
###Code
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(df['Seasonal First Difference'].dropna())
###Output
_____no_output_____
###Markdown
Partial AutocorrelationIn general, a partial correlation is a conditional correlation.It is the correlation between two variables under the assumption that we know and take into account the values of some other set of variables.For instance, consider a regression context in which y = response variable and x1, x2, and x3 are predictor variables. The partial correlation between y and x3 is the correlation between the variables determined taking into account how both y and x3 are related to x1 and x2.Formally, this relationship is defined as: $\frac{\text{Covariance}(y, x_3|x_1, x_2)}{\sqrt{\text{Variance}(y|x_1, x_2)\text{Variance}(x_3| x_1, x_2)}}$Check out this [link](http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc4463.htm) for full details on this. We can then plot this relationship:
###Code
result = plot_pacf(df["Seasonal First Difference"].dropna())
###Output
_____no_output_____
###Markdown
InterpretationTypically a sharp drop after lag "k" suggests an AR-k model should be used. If there is a gradual decline, it suggests an MA model. Final Thoughts on Autocorrelation and Partial Autocorrelation* Identification of an AR model is often best done with the PACF. * For an AR model, the theoretical PACF “shuts off” past the order of the model. The phrase “shuts off” means that in theory the partial autocorrelations are equal to 0 beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the AR model. By the “order of the model” we mean the most extreme lag of x that is used as a predictor. * Identification of an MA model is often best done with the ACF rather than the PACF. * For an MA model, the theoretical PACF does not shut off, but instead tapers toward 0 in some manner. A clearer pattern for an MA model is in the ACF. The ACF will have non-zero autocorrelations only at lags involved in the model. _____ Final ACF and PACF PlotsWe've run quite a few plots, so let's just quickly get our "final" ACF and PACF plots. These are the ones we will be referencing in the rest of the notebook below._____
###Code
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(df['Seasonal First Difference'].iloc[13:], lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(df['Seasonal First Difference'].iloc[13:], lags=40, ax=ax2)
###Output
_____no_output_____
###Markdown
Using the Seasonal ARIMA modelFinally we can use our ARIMA model now that we have an understanding of our data!
###Code
# For non-seasonal data
from statsmodels.tsa.arima_model import ARIMA
# I recommend you glance over this!
#
help(ARIMA)
###Output
Help on class ARIMA in module statsmodels.tsa.arima_model:
class ARIMA(ARMA)
| Autoregressive Integrated Moving Average ARIMA(p,d,q) Model
|
| Parameters
| ----------
| endog : array-like
| The endogenous variable.
| order : iterable
| The (p,d,q) order of the model for the number of AR parameters,
| differences, and MA parameters to use.
| exog : array-like, optional
| An optional array of exogenous variables. This should *not* include a
| constant or trend. You can specify this in the `fit` method.
| dates : array-like of datetime, optional
| An array-like object of datetime objects. If a pandas object is given
| for endog or exog, it is assumed to have a DateIndex.
| freq : str, optional
| The frequency of the time-series. A Pandas offset or 'B', 'D', 'W',
| 'M', 'A', or 'Q'. This is optional if dates are given.
|
|
| Notes
| -----
| If exogenous variables are given, then the model that is fit is
|
| .. math::
|
| \phi(L)(y_t - X_t\beta) = \theta(L)\epsilon_t
|
| where :math:`\phi` and :math:`\theta` are polynomials in the lag
| operator, :math:`L`. This is the regression model with ARMA errors,
| or ARMAX model. This specification is used, whether or not the model
| is fit using conditional sum of square or maximum-likelihood, using
| the `method` argument in
| :meth:`statsmodels.tsa.arima_model.ARIMA.fit`. Therefore, for
| now, `css` and `mle` refer to estimation methods only. This may
| change for the case of the `css` model in future versions.
|
| Method resolution order:
| ARIMA
| ARMA
| statsmodels.tsa.base.tsa_model.TimeSeriesModel
| statsmodels.base.model.LikelihoodModel
| statsmodels.base.model.Model
| builtins.object
|
| Methods defined here:
|
| __getnewargs__(self)
|
| __init__(self, endog, order, exog=None, dates=None, freq=None, missing='none')
| Initialize self. See help(type(self)) for accurate signature.
|
| fit(self, start_params=None, trend='c', method='css-mle', transparams=True, solver='lbfgs', maxiter=50, full_output=1, disp=5, callback=None, start_ar_lags=None, **kwargs)
| Fits ARIMA(p,d,q) model by exact maximum likelihood via Kalman filter.
|
| Parameters
| ----------
| start_params : array-like, optional
| Starting parameters for ARMA(p,q). If None, the default is given
| by ARMA._fit_start_params. See there for more information.
| transparams : bool, optional
| Whehter or not to transform the parameters to ensure stationarity.
| Uses the transformation suggested in Jones (1980). If False,
| no checking for stationarity or invertibility is done.
| method : str {'css-mle','mle','css'}
| This is the loglikelihood to maximize. If "css-mle", the
| conditional sum of squares likelihood is maximized and its values
| are used as starting values for the computation of the exact
| likelihood via the Kalman filter. If "mle", the exact likelihood
| is maximized via the Kalman Filter. If "css" the conditional sum
| of squares likelihood is maximized. All three methods use
| `start_params` as starting parameters. See above for more
| information.
| trend : str {'c','nc'}
| Whether to include a constant or not. 'c' includes constant,
| 'nc' no constant.
| solver : str or None, optional
| Solver to be used. The default is 'lbfgs' (limited memory
| Broyden-Fletcher-Goldfarb-Shanno). Other choices are 'bfgs',
| 'newton' (Newton-Raphson), 'nm' (Nelder-Mead), 'cg' -
| (conjugate gradient), 'ncg' (non-conjugate gradient), and
| 'powell'. By default, the limited memory BFGS uses m=12 to
| approximate the Hessian, projected gradient tolerance of 1e-8 and
| factr = 1e2. You can change these by using kwargs.
| maxiter : int, optional
| The maximum number of function evaluations. Default is 50.
| tol : float
| The convergence tolerance. Default is 1e-08.
| full_output : bool, optional
| If True, all output from solver will be available in
| the Results object's mle_retvals attribute. Output is dependent
| on the solver. See Notes for more information.
| disp : int, optional
| If True, convergence information is printed. For the default
| l_bfgs_b solver, disp controls the frequency of the output during
| the iterations. disp < 0 means no output in this case.
| callback : function, optional
| Called after each iteration as callback(xk) where xk is the current
| parameter vector.
| start_ar_lags : int, optional
| Parameter for fitting start_params. When fitting start_params,
| residuals are obtained from an AR fit, then an ARMA(p,q) model is
| fit via OLS using these residuals. If start_ar_lags is None, fit
| an AR process according to best BIC. If start_ar_lags is not None,
| fits an AR process with a lag length equal to start_ar_lags.
| See ARMA._fit_start_params_hr for more information.
| kwargs
| See Notes for keyword arguments that can be passed to fit.
|
| Returns
| -------
| `statsmodels.tsa.arima.ARIMAResults` class
|
| See also
| --------
| statsmodels.base.model.LikelihoodModel.fit : for more information
| on using the solvers.
| ARIMAResults : results class returned by fit
|
| Notes
| ------
| If fit by 'mle', it is assumed for the Kalman Filter that the initial
| unkown state is zero, and that the inital variance is
| P = dot(inv(identity(m**2)-kron(T,T)),dot(R,R.T).ravel('F')).reshape(r,
| r, order = 'F')
|
| predict(self, params, start=None, end=None, exog=None, typ='linear', dynamic=False)
| ARIMA model in-sample and out-of-sample prediction
|
| Parameters
| ----------
| params : array-like
| The fitted parameters of the model.
| start : int, str, or datetime
| Zero-indexed observation number at which to start forecasting, ie.,
| the first forecast is start. Can also be a date string to
| parse or a datetime type.
| end : int, str, or datetime
| Zero-indexed observation number at which to end forecasting, ie.,
| the first forecast is start. Can also be a date string to
| parse or a datetime type. However, if the dates index does not
| have a fixed frequency, end must be an integer index if you
| want out of sample prediction.
| exog : array-like, optional
| If the model is an ARMAX and out-of-sample forecasting is
| requested, exog must be given. Note that you'll need to pass
| `k_ar` additional lags for any exogenous variables. E.g., if you
| fit an ARMAX(2, q) model and want to predict 5 steps, you need 7
| observations to do this.
| dynamic : bool, optional
| The `dynamic` keyword affects in-sample prediction. If dynamic
| is False, then the in-sample lagged values are used for
| prediction. If `dynamic` is True, then in-sample forecasts are
| used in place of lagged dependent variables. The first forecasted
| value is `start`.
| typ : str {'linear', 'levels'}
|
| - 'linear' : Linear prediction in terms of the differenced
| endogenous variables.
| - 'levels' : Predict the levels of the original endogenous
| variables.
|
|
| Returns
| -------
| predict : array
| The predicted values.
|
|
|
| Notes
| -----
| Use the results predict method instead.
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| __new__(cls, endog, order, exog=None, dates=None, freq=None, missing='none')
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from ARMA:
|
| geterrors(self, params)
| Get the errors of the ARMA process.
|
| Parameters
| ----------
| params : array-like
| The fitted ARMA parameters
| order : array-like
| 3 item iterable, with the number of AR, MA, and exogenous
| parameters, including the trend
|
| hessian(self, params)
| Compute the Hessian at params,
|
| Notes
| -----
| This is a numerical approximation.
|
| loglike(self, params, set_sigma2=True)
| Compute the log-likelihood for ARMA(p,q) model
|
| Notes
| -----
| Likelihood used depends on the method set in fit
|
| loglike_css(self, params, set_sigma2=True)
| Conditional Sum of Squares likelihood function.
|
| loglike_kalman(self, params, set_sigma2=True)
| Compute exact loglikelihood for ARMA(p,q) model by the Kalman Filter.
|
| score(self, params)
| Compute the score function at params.
|
| Notes
| -----
| This is a numerical approximation.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from statsmodels.tsa.base.tsa_model.TimeSeriesModel:
|
| exog_names
|
| ----------------------------------------------------------------------
| Methods inherited from statsmodels.base.model.LikelihoodModel:
|
| information(self, params)
| Fisher information matrix of model
|
| Returns -Hessian of loglike evaluated at params.
|
| initialize(self)
| Initialize (possibly re-initialize) a Model instance. For
| instance, the design matrix of a linear model may change
| and some things must be recomputed.
|
| ----------------------------------------------------------------------
| Class methods inherited from statsmodels.base.model.Model:
|
| from_formula(formula, data, subset=None, drop_cols=None, *args, **kwargs) from builtins.type
| Create a Model from a formula and dataframe.
|
| Parameters
| ----------
| formula : str or generic Formula object
| The formula specifying the model
| data : array-like
| The data for the model. See Notes.
| subset : array-like
| An array-like object of booleans, integers, or index values that
| indicate the subset of df to use in the model. Assumes df is a
| `pandas.DataFrame`
| drop_cols : array-like
| Columns to drop from the design matrix. Cannot be used to
| drop terms involving categoricals.
| args : extra arguments
| These are passed to the model
| kwargs : extra keyword arguments
| These are passed to the model with one exception. The
| ``eval_env`` keyword is passed to patsy. It can be either a
| :class:`patsy:patsy.EvalEnvironment` object or an integer
| indicating the depth of the namespace to use. For example, the
| default ``eval_env=0`` uses the calling namespace. If you wish
| to use a "clean" environment set ``eval_env=-1``.
|
| Returns
| -------
| model : Model instance
|
| Notes
| ------
| data must define __getitem__ with the keys in the formula terms
| args and kwargs are passed on to the model instantiation. E.g.,
| a numpy structured or rec array, a dictionary, or a pandas DataFrame.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from statsmodels.base.model.Model:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| endog_names
| Names of endogenous variables
###Markdown
p,d,q parameters* p: The number of lag observations included in the model.* d: The number of times that the raw observations are differenced, also called the degree of differencing.* q: The size of the moving average window, also called the order of moving average.
###Code
# We have seasonal data!
model = sm.tsa.statespace.SARIMAX(df['Milk in pounds per cow'],order=(0,1,0), seasonal_order=(1,1,1,12))
results = model.fit()
print(results.summary())
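# The summary above reports AIC/BIC; a rough, illustrative way to compare candidate
# (p, d, q) x (P, D, Q, s) choices is to refit with different orders and prefer the fit
# with the lower results.aic / results.bic.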
results.resid.plot()
results.resid.plot(kind='kde')  # model residuals (the error left unexplained by the fit)
###Output
_____no_output_____
###Markdown
Prediction of Future ValuesFirst we can get an idea of how well our model performs by just predicting values that we actually already know:
###Code
df['forecast'] = results.predict(start = 150, end= 168, dynamic= True)
df[['Milk in pounds per cow','forecast']].plot(figsize=(12,8))
###Output
_____no_output_____
###Markdown
ForecastingThis requires more time periods, so let's create them with pandas onto our original dataframe!
###Code
df.tail()
# https://pandas.pydata.org/pandas-docs/stable/timeseries.html
# Alternatives
# pd.date_range(df.index[-1],periods=12,freq='M')
from pandas.tseries.offsets import DateOffset
# df.index[-1] is the last available date in the data
# adding DateOffset(months=x) for x in range(1, 24) generates the dates of the 23 months that follow it
future_dates = [df.index[-1] + DateOffset(months=x) for x in range(1,24) ]
future_dates
future_dates_df = pd.DataFrame(index=future_dates[1:],columns=df.columns)
future_df = pd.concat([df,future_dates_df])
future_df.head()
future_df.tail()
future_df['forecast'] = results.predict(start = 168, end = 188, dynamic= True)
future_df[['Milk in pounds per cow', 'forecast']].plot(figsize=(12, 8))
###Output
_____no_output_____ |
ipython/Tools/Plot xyz.ipynb | ###Markdown
ARC Tools Plot 3D coordinates input parameters:
###Code
xyz = """
C 1.00749012 0.07872046 0.03791885
C 0.48062281 -1.18732352 0.62821530
C -0.81593564 -1.48451129 0.76677988
H 0.67999858 0.19592331 -1.00028033
H 0.67999730 0.94931172 0.61555547
H 2.10239374 0.06632262 0.04370027
H 1.21076637 -1.91583774 0.96788621
"""
from arc.plotter import draw_3d, show_sticks
from arc.species.converter import check_xyz_dict, molecules_from_xyz
from arc.species.converter import xyz_to_pybel_mol, pybel_to_inchi
from IPython.display import display
xyz = check_xyz_dict(xyz)
# Analyze graph
s_mol, b_mol = molecules_from_xyz(xyz)
mol = b_mol or s_mol
print(mol.to_smiles())
print('\n')
print(pybel_to_inchi(xyz_to_pybel_mol(xyz)))
print('\n')
print(mol.to_adjacency_list())
print('\n')
display(mol)
success = show_sticks(xyz=xyz)
draw_3d(xyz=xyz)
###Output
_____no_output_____
###Markdown
ARC Tools Plot 3D coordinates input parameters:
###Code
xyz = """
O 1.17464110 -0.15309781 0.00000000
N 0.06304988 0.35149648 0.00000000
C -1.12708952 -0.11333971 0.00000000
H -1.93800144 0.60171738 0.00000000
H -1.29769464 -1.18742971 0.00000000
"""
from arc.plotter import draw_3d, show_sticks, plot_3d_mol_as_scatter
from arc.species.converter import check_xyz_dict, molecules_from_xyz
from arc.species.converter import xyz_to_pybel_mol, pybel_to_inchi
from IPython.display import display
%matplotlib inline
xyz = check_xyz_dict(xyz)
# Analyze graph
s_mol, b_mol = molecules_from_xyz(xyz)
mol = b_mol or s_mol
print(mol.to_smiles())
print('\n')
print(pybel_to_inchi(xyz_to_pybel_mol(xyz)))
print('\n')
print(mol.to_adjacency_list())
print('\n')
display(mol)
show_sticks(xyz)
draw_3d(xyz)
plot_3d_mol_as_scatter(xyz)
###Output
_____no_output_____
###Markdown
ARC Tools Plot 3D coordinates input parameters:
###Code
xyz = """
O 1.17464110 -0.15309781 0.00000000
N 0.06304988 0.35149648 0.00000000
C -1.12708952 -0.11333971 0.00000000
H -1.93800144 0.60171738 0.00000000
H -1.29769464 -1.18742971 0.00000000
"""
from arc.plotter import show_sticks, plot_3d_mol_as_scatter
from arc.species.converter import check_xyz_dict, molecules_from_xyz
from arc.species.converter import xyz_to_pybel_mol, pybel_to_inchi
from IPython.display import display
%matplotlib inline
xyz = '/home/alongd/Code/ARC/ipython/Tools/ts1016.log'
xyz = check_xyz_dict(xyz)
path = '/home/alongd/Code/runs/T3/35/iteration_0/ARC/calcs/Species/C2H4_0/freq_a6950/output.out'
show_sticks(xyz)
# Analyze graph
s_mol, b_mol = molecules_from_xyz(xyz)
mol = b_mol or s_mol
print(mol.to_smiles())
print('\n')
print(pybel_to_inchi(xyz_to_pybel_mol(xyz)))
print('\n')
print(mol.to_adjacency_list())
print('\n')
display(mol)
show_sticks(xyz)
plot_3d_mol_as_scatter(xyz)
###Output
WARNING:pint.util:Could not resolve planks_constant: UndefinedUnitError()
WARNING:pint.util:Could not resolve plank_constant: UndefinedUnitError()
###Markdown
ARC Tools Plot 3D coordinates
###Code
xyz = """
C 1.00749012 0.07872046 0.03791885
C 0.48062281 -1.18732352 0.62821530
C -0.81593564 -1.48451129 0.76677988
H 0.67999858 0.19592331 -1.00028033
H 0.67999730 0.94931172 0.61555547
H 2.10239374 0.06632262 0.04370027
H 1.21076637 -1.91583774 0.96788621
"""
from arc.plotter import draw_3d, show_sticks
from arc.species.converter import standardize_xyz_string, molecules_from_xyz
from arc.species.converter import xyz_to_pybel_mol, pybel_to_inchi
from IPython.display import display
xyz = standardize_xyz_string(xyz)
# Analyze graph
s_mol, b_mol = molecules_from_xyz(xyz)
mol = b_mol or s_mol
print mol.toSMILES()
print('\n')
print pybel_to_inchi(xyz_to_pybel_mol(xyz))
print('\n')
print mol.toAdjacencyList()
print('\n')
display(mol)
success = show_sticks(xyz=xyz)
draw_3d(xyz=xyz)
###Output
_____no_output_____ |
docs/source/example_solution_racetrack.ipynb | ###Markdown
Example Case using RacetrackBelow is an example of how to initialize the Racetrack environment and solve/compare with multiple solvers. Boilerplate If you're playing with things under the hood as you run these, autoreload is always useful...
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
If necessary, add the directory containing lrl to the path (a workaround for when lrl is not installed as a package)
###Code
import sys
# Path to directory containing lrl
sys.path.append('../')
from lrl import environments, solvers
from lrl.utils import plotting
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Logging is used throughout lrl for basic info and debugging.
###Code
import logging
logging.basicConfig(format='%(asctime)s - %(name)s - %(funcName)s - %(levelname)s - %(message)s',
level=logging.INFO, datefmt='%H:%M:%S')
logger = logging.getLogger(__name__)
###Output
_____no_output_____
###Markdown
Initialize an Environment Initialize the 20x10 racetrack that includes some oily (stochastic) surfaces.**Note**: Make sure that your velocity limits suit your track. A track must have a grass padding around the entire course that prevents a car from trying to exit the track entirely, so if max(abs(vel))==3, you need 3 grass tiles around the outside perimeter of your map. For track, we have a 2-tile perimeter so velocity must be less than +-2.
###Code
# This will raise an error due to x_vel max limit
try:
rt = environments.get_racetrack(track='20x10_U',
x_vel_limits=(-2, 20), # Note high x_vel upper limit
y_vel_limits=(-2, 2),
x_accel_limits=(-2, 2),
y_accel_limits=(-2, 2),
max_total_accel=2,
)
except IndexError as e:
print("Caught the following error while building a track that shouldn't work:")
print(e)
print("")
# This will work
try:
rt = environments.get_racetrack(track='20x10_U',
x_vel_limits=(-2, 2),
y_vel_limits=(-2, 2),
x_accel_limits=(-2, 2),
y_accel_limits=(-2, 2),
max_total_accel=2,
)
print("But second track built perfectly!")
except:
print("Something went wrong, we shouldn't be here")
###Output
Caught the following error while building a track that shouldn't work:
Caught IndexError while building Racetrack. Likely cause is a max velocity that is creater than the wall padding around the track (leading to a car that can exit the track entirely)
But second track built perfectly!
###Markdown
Take a look at the track using plot_env
###Code
plotting.plot_env(env=rt)
###Output
_____no_output_____
###Markdown
There are also additional maps available - see the racetrack code base for more
###Code
print(f'Available tracks: {list(environments.racetrack.TRACKS.keys())}')
###Output
Available tracks: ['3x4_basic', '5x4_basic', '10x10', '10x10_basic', '10x10_all_oil', '15x15_basic', '20x20_basic', '20x20_all_oil', '30x30_basic', '20x10_U_all_oil', '20x10_U', '10x10_oil', '20x15_risky']
###Markdown
Tracks are simply lists of strings using a specific set of characters. See the racetrack code for more detail on how to make your own
###Code
for line in environments.racetrack.TRACKS['20x10_U']:
print(line)
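# Character legend (inferred from the maps plotted in this notebook): 'G' = grass padding,
# 'O' = oily (stochastic) tile, 'S' = start, 'F' = finish, ' ' = ordinary track surface.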
###Output
GGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGG
GGG OOOO GG
GGG GGGG GG
GGOOOOGGGGGGGGOOOOGG
GGOOOGGGGGGGGGGOOOGG
GG GGGGGGGG GG
GG SGGGGG GG
GGGGGGGGGGGGGGFFFFGG
GGGGGGGGGGGGGGFFFFGG
###Markdown
We can draw them using character art! For example, here is a custom track with more oil and a different shape than above...
###Code
custom_track = \
"""GGGGGGGGGGGGGGGGGGGG
GGGGGGGGGGGGGGGGGGGG
GGGOOOOOOOOOOOOOOOGG
GGG GGGG GG
GG GGGGGGGG GG
GGOOOOOOSGGGGGOOOOGG
GGGGGGGGGGGGGGFFFFGG
GGGGGGGGGGGGGGFFFFGG"""
custom_track = custom_track.split('\n')
rt_custom = environments.get_racetrack(track=custom_track,
x_vel_limits=(-2, 2),
y_vel_limits=(-2, 2),
x_accel_limits=(-2, 2),
y_accel_limits=(-2, 2),
max_total_accel=2,
)
plotting.plot_env(env=rt_custom)
###Output
_____no_output_____
###Markdown
Solve with Value Iteration and Interrogate Solution
###Code
rt_vi = solvers.ValueIteration(env=rt)
rt_vi.iterate_to_convergence()
###Output
16:33:21 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Solver iterating to convergence (Max delta in value function < 0.001 or iters>500)
16:33:23 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Solver converged to solution in 18 iterations
###Markdown
And we can then score our solution by running it multiple times through the environment
###Code
scoring_data = rt_vi.score_policy(iters=500)
###Output
_____no_output_____
###Markdown
score_policy returns an EpisodeStatistics object that contains details from each episode taken during the scoring. The easiest way to interact with it is to grab the data as a dataframe
###Code
print(f'type(scoring_data) = {type(scoring_data)}')
scoring_data_df = scoring_data.to_dataframe(include_episodes=True)
scoring_data_df.head(3)
scoring_data_df.tail(3)
###Output
_____no_output_____
###Markdown
Reward, Steps, and Terminal columns give data on that specific walk, whereas reward_mean, \_median, etc. columns give aggregate scores up until that walk. For example:
###Code
print(f'The reward obtained in the 499th episode was {scoring_data_df.loc[499, "reward"]}')
print(f'The mean reward obtained in the 0-499th episodes (inclusive) was {scoring_data_df.loc[499, "reward_mean"]}')
###Output
The reward obtained in the 499th episode was 91.0
The mean reward obtained in the 0-499th episodes (inclusive) was 90.476
###Markdown
And we can access the actual episode path for each episode
###Code
print(f'Episode 0 (directly) : {scoring_data.episodes[0]}')
print(f'Episode 0 (from the dataframe): {scoring_data_df.loc[0, "episodes"]}')
###Output
Episode 0 (directly) : [(8, 2, 0, 0), (6, 2, -2, 0), (5, 3, -1, 1), (5, 5, 0, 2), (7, 7, 2, 2), (9, 7, 2, 0), (11, 7, 2, 0), (13, 6, 2, -1), (15, 4, 2, -2), (15, 2, 0, -2), (14, 0, -1, -2)]
Episode 0 (from the dataframe): [(8, 2, 0, 0), (6, 2, -2, 0), (5, 3, -1, 1), (5, 5, 0, 2), (7, 7, 2, 2), (9, 7, 2, 0), (11, 7, 2, 0), (13, 6, 2, -1), (15, 4, 2, -2), (15, 2, 0, -2), (14, 0, -1, -2)]
###Markdown
Plotting Results Now plot 100 randomly chosen episodes on the map, returned as a matplotlib axes object
###Code
ax_episodes = plotting.plot_episodes(episodes=scoring_data.episodes, env=rt, max_episodes=100, )
###Output
_____no_output_____
###Markdown
score_policy also lets us explore hypothetical scenarios, such as what happens if we start from a different location. Let's try that by starting in the top left ((x,y) location (3, 7), with x=0 at the left and y=0 at the bottom) with a velocity of (-2, -2), and plot it on our existing axes in red.
###Code
scoring_data_alternate = rt_vi.score_policy(iters=500, initial_state=(3, 7, -2, -2))
ax_episodes_with_alternate = plotting.plot_episodes(episodes=scoring_data_alternate.episodes, env=rt,
add_env_to_plot=False, color='r', ax=ax_episodes
# savefig='my_figure_file', # If you wanted the figure to save directly
# to file, use savefig (used throughout
# lrl's plotting scripts)
)
# Must get_figure because we're reusing the figure from above and jupyter wont automatically reshow it
ax_episodes_with_alternate.get_figure()
###Output
_____no_output_____
###Markdown
Where we can see the optimal policy takes a first action of (+2, 0), resulting in a first velocity of (0, -2) to avoid hitting the grass on step 1, then redirects back up and around the track (although it sometimes slips on (3, 5) and drives down to (3, 3) before recovering). We can also look at the value function and best policy for all states.**NOTE**: Because the state is (x, y, v_x, v_y), it is hard to capture all results on our (x, y) maps. plot_solver_results in this case plots a map for each (v_x, v_y) combination, with the axis title denoting which combination is shown.
###Code
# Sorry, this will plot 24 plots normally.
# To keep concise, we turn matplotlib inline plotting off then back on and selectively plot a few examples
%matplotlib agg
ax_results = plotting.plot_solver_results(env=rt, solver=rt_vi)
%matplotlib inline
###Output
C:\Users\Scribs\Anaconda3\envs\lrl\lib\site-packages\matplotlib\pyplot.py:514: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
max_open_warning, RuntimeWarning)
###Markdown
Plots are indexed by the additional state variables (v_x, v_y), so we can see the policy for v_x=0, v_y=0 like:
###Code
ax_results[(0, 0)].get_figure()
###Output
_____no_output_____
###Markdown
Where we can see the estimated value of the starting (light green) location is 30.59 (including discounting and costs for steps). We can also look at the policy for (-2, -2) (the velocity for our alternate start used above in red):
###Code
ax_results[(-2, -2)].get_figure()
###Output
_____no_output_____
###Markdown
Where we see that in the top left tile (3, 7) the optimal policy is (2, 0), just as we saw in our red path plot above. We can also see that tile (2, 2) (bottom left) has a -100 value because it is impossible not to crash with a (-2, -2) initial velocity from tile (2, 2), given that our acceleration limit is 2. Solving with Policy Iteration and Comparing to Value Iteration We can also use other solvers
###Code
rt_pi = solvers.PolicyIteration(env=rt)
rt_pi.iterate_to_convergence()
###Output
16:33:37 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Solver iterating to convergence (1 iteration without change in policy or iters>500)
16:33:39 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Solver converged to solution in 6 iterations
###Markdown
We can look at how PI and VI converged relative to each other, comparing the maximum change in value function for each iteration
###Code
# (these are simple convenience functions for plotting, basically just recipes. See the plotting API)
# We can pass the solver..
ax = plotting.plot_solver_convergence(rt_vi, label='vi')
# Or going a little deeper into the API, with style being passed to matplotlib's plot function...
ax = plotting.plot_solver_convergence_from_df(rt_pi.iteration_data.to_dataframe(), y='delta_max', x='iteration', ax=ax, label='pi', ls='', marker='o')
ax.legend()
###Output
_____no_output_____
###Markdown
And looking at policy changes per iteration
###Code
# (these are simple convenience functions for plotting, basically just recipes. See the plotting API)
# We can pass the solver..
ax = plotting.plot_solver_convergence(rt_vi, y='policy_changes', label='vi')
# Or going a little deeper into the API...
ax = plotting.plot_solver_convergence_from_df(rt_pi.iteration_data.to_dataframe(), y='policy_changes', x='iteration', ax=ax, label='pi', ls='', marker='o')
ax.legend()
###Output
_____no_output_____
###Markdown
So we can see PI accomplishes more per iteration. But, is it faster? Let's look at time per iteration
###Code
# (these are simple convenience functions for plotting, basically just recipes. See the plotting API)
# We can pass the solver..
ax = plotting.plot_solver_convergence(rt_vi, y='time', label='vi')
# Or going a little deeper into the API...
ax = plotting.plot_solver_convergence_from_df(rt_pi.iteration_data.to_dataframe(), y='time', x='iteration', ax=ax, label='pi', ls='', marker='o')
ax.legend()
print(f'Total solution time for Value Iteration (excludes any scoring time): {rt_vi.iteration_data.to_dataframe().loc[:, "time"].sum():.2f}s')
print(f'Total solution time for Policy Iteration (excludes any scoring time): {rt_pi.iteration_data.to_dataframe().loc[:, "time"].sum():.2f}s')
###Output
Total solution time for Value Iteration (excludes any scoring time): 1.85s
Total solution time for Policy Iteration (excludes any scoring time): 2.26s
###Markdown
Solve with Q-Learning We can also use QLearning, although it needs a few parameters
###Code
# Let's be explicit with our QLearning settings for alpha and epsilon
alpha = 0.1 # Constant alpha during learning
# Decay function for epsilon (see QLearning() and decay_functions() in documentation for syntax)
# Decay epsilon linearly from 0.2 at timestep (iteration) 0 to 0.05 at timestep 1500,
# keeping constant at 0.05 for ts>1500
epsilon = {
'type': 'linear',
'initial_value': 0.2,
'initial_timestep': 0,
'final_value': 0.05,
'final_timestep': 1500
}
# Above PI/VI used the default gamma, but we will specify one here
gamma = 0.9
# Convergence is kinda tough to interpret automatically for Q-Learning. One good way to monitor convergence is to
# evaluate how good the greedy policy at a given point in the solution is and decide if it is still improving.
# We can enable this with score_while_training (available for Value and Policy Iteration as well)
# NOTE: During scoring runs, the solver is acting greedily and NOT learning from the environment. These are separate
# runs solely used to estimate solution progress
# NOTE: Scoring every 50 iterations is probably a bit much, but used to show a nice plot below. The default 500/500
# is probably a better general guidance
score_while_training = {
'n_trains_per_eval': 50, # Number of training episodes we run per attempt to score the greedy policy
#  (eg: Here we do a scoring run after every 50 training episodes, where training episodes
# are the usual epsilon-greedy exploration episodes)
'n_evals': 250, # Number of times we run through the env with the greedy policy whenever we score
}
# score_while_training = True # This calls the default settings, which are also 500/500 like above
rt_ql = solvers.QLearning(env=rt, alpha=alpha, epsilon=epsilon, gamma=gamma,
max_iters=5000, score_while_training=score_while_training)
###Output
_____no_output_____
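###Markdown
Before running the solver, it may help to see what the linear epsilon schedule above actually does. The small standalone function below is only an illustration of a linear decay from 0.2 at iteration 0 to 0.05 at iteration 1500, held constant afterwards. It is an assumed re-implementation written for clarity, not lrl's own decay_functions code (see the lrl documentation for the real one).
###Code
# Illustrative only: an assumed re-implementation of the linear epsilon decay described above
def linear_decay(t, initial_value=0.2, initial_timestep=0, final_value=0.05, final_timestep=1500):
    if t <= initial_timestep:
        return initial_value
    if t >= final_timestep:
        return final_value
    slope = (final_value - initial_value) / (final_timestep - initial_timestep)
    return initial_value + slope * (t - initial_timestep)

print([round(linear_decay(t), 3) for t in (0, 500, 1000, 1500, 3000)])
# Expected: [0.2, 0.15, 0.1, 0.05, 0.05]
###Output
_____no_output_____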
###Markdown
(note how long Q-Learning takes for this environment versus the planning algorithms)
###Code
rt_ql.iterate_to_convergence()
###Output
16:33:41 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Solver iterating to convergence (20 episodes with max delta in Q function < 0.1 or iters>5000)
16:33:41 - lrl.solvers.learners - iterate - INFO - Performing iteration (episode) 0 of Q-Learning
16:33:41 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 50
16:33:42 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -103.0, r_max = -103.0
16:33:42 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 100
16:33:43 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -103.0, r_max = -103.0
16:33:43 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 150
16:33:43 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -103.0, r_max = -103.0
16:33:44 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 200
16:33:44 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -102.0, r_max = -102.0
16:33:44 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 250
16:33:45 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -103.424, r_max = -102.0
16:33:45 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 300
16:33:46 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -100.0, r_max = -100.0
16:33:46 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 350
16:33:46 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -103.0, r_max = -103.0
16:33:46 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 400
16:33:47 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.224, r_max = -104.0
16:33:47 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 450
16:33:47 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -103.0, r_max = -103.0
16:33:48 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 500
16:33:48 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.22, r_max = -104.0
16:33:48 - lrl.solvers.learners - iterate - INFO - Performing iteration (episode) 500 of Q-Learning
16:33:48 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 550
16:33:49 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.264, r_max = -104.0
16:33:49 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 600
16:33:50 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.0, r_max = -104.0
16:33:50 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 650
16:33:50 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.0, r_max = -104.0
16:33:50 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 700
16:33:51 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.228, r_max = -104.0
16:33:51 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 750
16:33:52 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -106.0, r_max = -106.0
16:33:52 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 800
16:33:52 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.464, r_max = -104.0
16:33:53 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 850
16:33:53 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -105.0, r_max = -105.0
16:33:53 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 900
16:33:54 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -105.0, r_max = -105.0
16:33:54 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 950
16:33:54 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -106.824, r_max = -105.0
16:33:55 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1000
16:33:55 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.54, r_max = -104.0
16:33:55 - lrl.solvers.learners - iterate - INFO - Performing iteration (episode) 1000 of Q-Learning
16:33:55 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1050
16:33:56 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.0, r_max = -104.0
16:33:56 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1100
16:33:57 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -100.0, r_max = -100.0
16:33:57 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1150
16:33:57 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -103.796, r_max = -103.0
16:33:58 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1200
16:33:58 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -106.128, r_max = -106.0
16:33:58 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1250
16:33:59 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -104.0, r_max = -104.0
16:33:59 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1300
16:34:00 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -107.852, r_max = -107.0
16:34:00 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1350
16:34:00 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -105.416, r_max = -104.0
16:34:01 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1400
16:34:01 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -100.0, r_max = -100.0
16:34:02 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1450
16:34:02 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy achieved: r_mean = -108.08, r_max = -108.0
16:34:02 - lrl.solvers.base_solver - iterate_to_convergence - INFO - Current greedy policy being scored 250 times at iteration 1500
###Markdown
Like above, we can plot the number of policy changes per iteration. But this plot looks very different from above and shows one view of why Q-Learning takes many more iterations (each iteration accomplishes a lot less learning than a planning algorithm)
###Code
rt_ql_iter_df = rt_ql.iteration_data.to_dataframe()
rt_ql_iter_df.plot(x='iteration', y='policy_changes', kind='scatter', )
###Output
_____no_output_____
###Markdown
We can access the intermediate scoring through the scoring_summary (GeneralIterationData) and scoring_episode_statistics (EpisodeStatistics) objects
###Code
rt_ql_intermediate_scoring_df = rt_ql.scoring_summary.to_dataframe()
rt_ql_intermediate_scoring_df
plt.plot(rt_ql_intermediate_scoring_df.loc[:, 'iteration'], rt_ql_intermediate_scoring_df.loc[:, 'reward_mean'], '-o')
###Output
_____no_output_____
###Markdown
In this case we see that somewhere between the 3500th and 4000th iteration the solver finds a solution and starts building a policy around it. This won't always be the case, and learning may be more incremental. And if we wanted to access the actual episodes that went into one of these datapoints, they're available in a dictionary of EpisodeStatistics objects here (keyed by iteration number):
###Code
i = 3750
print(f'EpisodeStatistics for the scoring at iter == {i}:\n')
rt_ql.scoring_episode_statistics[i].to_dataframe().head()
###Output
EpisodeStatistics for the scoring at iter == 3750:
|
stage2_Ref_Calls/Last_2minute_pdf_data.ipynb | ###Markdown
Imports and Installing
###Code
!pip install tabula-py==0.3.0
!pip install PyPDF2
%run ../imports.py
from bs4 import BeautifulSoup as bs
import requests, tabula, io, os, time
from tabula import read_pdf, convert_into
import datetime
from PyPDF2 import PdfFileReader, PdfFileWriter
import warnings
warnings.filterwarnings("ignore")
# from urllib.request import urlopen
# from tika import parser # tika had even more errors than tabula
# !pip install tika
# used tabula which is a Java wrapper since PyPDF2 and processing directly from pdf url didn't work
# With tabula, the url pdf is written to a temp pdf file then processed into a dataframe in Java
###Output
_____no_output_____
###Markdown
Extracting Last 2 Minutes Reports for March 2015 to June 2017
###Code
main_url1 = 'https://official.nba.com/nba-last-two-minute-reports-archive/'
main_url2 = 'https://official.nba.com/2017-18-nba-officiating-last-two-minute-reports/'
raw_headers = ['Period','Time Call Type','Committing Player','Disadvantaged Player','Unnamed: 4',
'Review Decision','Video','Date','Score']
df = pd.DataFrame(columns=raw_headers)
df_errors = pd.DataFrame(columns=raw_headers)
def extract_df(main_url):
res = requests.get(main_url)
soup = bs(res.content, 'lxml')
div = soup.find('div', {'class': 'entry-content'})
resume = True
c = 0
for row in div.find_all('p'):
c += 1
game = {}
links = row.find_all('a')
if links == []:
pass
else:
# only process page elements that have links in them (assumed href are only in <a> elements)
for link in links:
# if '4/2016/11/76ers-109-Pacers-105-OT.pdf' in link.attrs['href'] :
# print(link)
# resume = True
# elif '4/2015/06/L2M-GSW-CLE-6-9-15-FINALS-GAME-3.pdf' in link.attrs['href'] :
# print(link)
# resume = False
if resume:
score = link.text
score = score.replace(' ', '')
score = score.replace(',', '_')
link = link.attrs['href']
if 'https://ak-static.cms.nba.com/' in link:
# narrows results to only pdf game score links
try :
date = link.split('/')[-1]
date = date.split('-')[3:6]
# elements 0-2 of the split are 'L2M', home team, visiting team; elements 3-5 are the date values:
date_str = date[0] + '-' + date[1] + '-' + date[2].strip('.pdf')
date[2] = date[2].strip('.pdf')
file_name = score + '_' + date_str
except :
file_name = link
print(file_name)
try : raw_content = requests.get(link).content
except :
try :
time.sleep(5)
raw_content = requests.get(link).content
except :
time.sleep(45)
raw_content = requests.get(link).content
try :
# parsed = parser.from_file(link)
# print(parsed["content"]) # a lot of ordering errors with tika parser
with io.BytesIO(raw_content) as open_pdf_file:
pdfFile = PdfFileReader(open_pdf_file, strict=False)
temp_pdf = PdfFileWriter()
temp_pdf.appendPagesFromReader(pdfFile)
temp_pdf.write(open('temp_pdf.pdf', 'wb'))
df_game = tabula.read_pdf('temp_pdf.pdf') # some pdf errors with tabula
df_game['Date'] = datetime.date(int(date[2]), int(date[0]), int(date[1]))
df_game['Score'] = score
global df
global df_errors
# print(df_game.columns, len(df_game.columns))
# print((df.columns==df_game.columns).all())
if (len(df_game.columns) == len(df.columns)) & (df_game.columns == df.columns).all():
df = df.append(df_game, ignore_index=True)
else:
df_game.loc[-1,:] = df_game.columns
# df_game.columns[0:9] = df.columns[0:9]
df_errors = df_errors.append(df_game, ignore_index=True)
os.rename('temp_pdf.pdf', file_name + '.pdf')
except Exception as err :
try :
print(err)
print(df.shape)
print(df_game.shape)
except : pass
extract_df(main_url2)
df.info()
# df.to_csv("L2M_archive.csv", index=False, compression='gzip')
# df.to_csv("L2M_through_Raptors118_Knicks107_11-12-16.csv", index=False, compression='gzip')
###Output
_____no_output_____
###Markdown
Trying to Parse 17-18 Season PDFs, which Have a New Structure & Graphics
###Code
# pdf's from 17-18 season onward have new graphics, must parse differently
df_game = tabula.read_pdf('temp_pdf.pdf')
df_game.head()
# Formatting and structuring after parsing is more frequently off
# Decision was to use just 15-16 and 16-17 sets and not 17-18 dataset
###Output
_____no_output_____
###Markdown
Last Automated Option to Parse PDFs with Errors
###Code
files = os.listdir('./data/')
df = pd.DataFrame(columns=raw_headers)
df_errors = pd.DataFrame(columns=raw_headers)
for file in files:
if '.pdf' not in file:
continue
else:
print(file)
try:
# most errors are from java parser using first data row as header, fixing those pdfs
df_game = tabula.read_pdf('./data/' + file)
df_game.loc[-1,:] = df_game.columns
file = file.strip('.pdf')
file = file.split('_')
date = file[2].split('-')
df_game['Date'] = datetime.date(int(date[2]), int(date[0]), int(date[1]))
df_game['Score'] = file[0] + '_' + file[1]
# print(df_game.columns)
df_game.columns = df.columns
if (len(df_game.columns) == len(df.columns)) & (df_game.columns == df.columns).all():
df = df.append(df_game, ignore_index=True)
except Exception as err :
print(err)
df.info()
df['Period'].value_counts()
df['Unnamed: 4'].value_counts()
df['Review Decision'].value_counts()
df['Video'].value_counts()
df.to_csv("L2M_errors_fixed.csv", index=False, compression='gzip')
###Output
_____no_output_____
###Markdown
Formatting Dataframe which Collected Tables with Obvious Errors
###Code
df = pd.read_csv("L2M_errors_fixed.csv", index_col=None, compression='gzip')
df.head()
df = pd.read_csv("L2M_errors_fixed.csv", index_col=None, compression='gzip')
review_types = ['CNC', 'CC', 'INC', 'IC']
foul_types = ['Personal', '24 Second', 'Shooting', 'Loose Ball', 'Offensive', 'Traveling']
# The major types of analyzable fouls, will only be investigating this subset
def align_to_columns(df):
debug = ''
try:
df_errors = pd.DataFrame()
if (df.columns != raw_headers).all():
print('column header error')
df.columns = raw_headers
# For loop to add all comment text to its matching row, the one above
for idx, row in df.iterrows():
if type(row['Period']) == float:
continue
elif 'Period' in row['Period']:
continue
elif (row['Period'] in 'Comment:'):
df_errors.loc[idx-1, 'Comment'] = row['Time Call Type']
continue
elif (row['Period'].split()[0] in 'Comment:'):
df_errors.loc[idx-1, 'Comment'] = str(row['Period']) + ' ' + str(row['Time Call Type'])
continue
for idx, row in df.iterrows():
# Processing column 'Period'
debug = row['Period']
if type(row['Period']) == float:
continue
elif 'Period' in row['Period']:
continue
elif row['Period'] in ['Q4','Q5','Q6']: # possible correct values for this column
df_errors.loc[idx, 'Period'] = row['Period']
elif ('Q4' in row['Period']):
df_errors.loc[idx, 'Period'] = 'Q4'
elif ('Q5' in row['Period']):
df_errors.loc[idx, 'Period'] = 'Q5'
elif ('Q6' in row['Period']):
df_errors.loc[idx, 'Period'] = 'Q6'
else:
continue
if (df_errors.loc[idx, 'Period'] in ['Q4','Q5','Q6']):
df_errors.loc[idx, 'Soup'] = ''
for cell in row.values:
df_errors.loc[idx, 'Soup'] += str(cell) + ' '
# Processing for column 'Time' and 'Foul' (foul type)
match = re.search('\d\d:\d\d.\d', df_errors.loc[idx, 'Soup'])
if match:
time = match.group()
df_errors.loc[idx, 'Time'] = time
for foul in foul_types:
if foul in df_errors.loc[idx, 'Soup']:
df_errors.loc[idx, 'Foul'] = foul
# Processing for columns 'Fouler' and 'Foulee'
name1 = str(df.loc[idx, 'Committing Player'])
name3 = str(df.loc[idx, 'Unnamed: 4']) # This column should be empty, except by parser error
name2 = str(df.loc[idx, 'Disadvantaged Player'])
foulee_done = False
# If name is in correct 2nd column it is always the disadvantaged player
if len(name2.split()) == 2: # If entry is two words will assume it is a name
df_errors.loc[idx, 'Foulee'] = name2
foulee_done = True
if len(name3.split()) == 2:
if foulee_done: # then name in this column is for fouler
df_errors.loc[idx, 'Fouler'] = name3
else:
df_errors.loc[idx, 'Foulee'] = name3
foulee_done = True
if len(name1.split()) == 2:
if foulee_done: # then name in this column is for fouler
df_errors.loc[idx, 'Fouler'] = name1
else:
df_errors.loc[idx, 'Foulee'] = name1
foulee_done = True
if not foulee_done: pass
# print('no name found: ', row)
review_done = False
for review_decision in review_types:
if review_decision in str(df_errors.loc[idx, 'Soup']):
df_errors.loc[idx, 'Review Decision'] = review_decision
review_done = True
if not review_done: # if parser moved review decision to row below in table
soup_temp = ''
for cell in df.loc[idx+1,:].values:
soup_temp += str(cell) + ' '
for review_decision in review_types:
if review_decision in soup_temp:
df_errors.loc[idx, 'Review Decision'] = review_decision
review_done = True
if not review_done:
if 'Observable in enhanced video' in str(df_errors.loc[idx, 'Comment']):
df_errors.loc[idx, 'Review Decision'] = 'PINC' # for Partial Incorrect No Call
df_errors.loc[idx, 'Date'] = row['Date']
df_errors.loc[idx, 'Score'] = row['Score']
except ValueError as err : print(debug, err)
return df_errors
df_errors = align_to_columns(df)
df_errors.info()
df_errors[df_errors.drop(['Time','Fouler','Comment'], axis=1).isnull().any(axis=1)]
df_errors['Period'].value_counts()
df_errors['Time'].sort_values()
df_errors['Foul'].value_counts()
df_errors['Review Decision'].value_counts()
df = df_errors[~df_errors[['Review Decision','Foulee','Comment']].isnull().any(axis=1)]
df.info()
df[df['Foul'].isnull()] # All the types of fouls of this project seem to have been extracted correctly
df = df[~df['Foul'].isnull()]
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3925 entries, 0 to 10033
Data columns (total 10 columns):
Comment 3925 non-null object
Period 3925 non-null object
Soup 3925 non-null object
Time 3924 non-null object
Foul 3925 non-null object
Foulee 3925 non-null object
Fouler 3586 non-null object
Review Decision 3925 non-null object
Date 3925 non-null object
Score 3925 non-null object
dtypes: object(10)
memory usage: 337.3+ KB
###Markdown
Formatting Mostly Correct Raw Dataframes
###Code
df2 = pd.read_csv("./data/L2M_through_Raptors118_Knicks107_11-12-16.csv", compression='gzip')
df3 = pd.read_csv("./data/L2M_archive.csv", compression='gzip')
df2 = df2.append(df3, ignore_index=True)
df2.info()
df2[~df2['Unnamed: 4'].isnull()]
def get_name(row):
if type(row['Disadvantaged Player']) == float:
return row['Unnamed: 4']
else:
return row['Disadvantaged Player']
df2['Disadvantaged Player'] = df2.apply(get_name, axis=1)
df2['Review Decision'].value_counts()
df2['Period'].value_counts()
df2.head()
def format_df(df):
try :
df.drop('Unnamed: 4', axis=1, inplace=True)
df.drop('Video', axis=1, inplace=True)
df.columns = ['Period','data','Fouler','Foulee','Review','Date','Score']
except : pass
for idx, row in df.iterrows():
if ('Comment:' in str(row['Period'])) | ('mment:' in str(row['Period'])):
df.loc[idx-1, 'Comment'] = row['data']
# print(df.loc[idx-1, 'Comment'])
df.drop(idx, axis=0, inplace=True)
for idx, row in df.iterrows():
df.loc[idx, 'Soup'] = ''
for cell in row.values:
df.loc[idx, 'Soup'] += str(cell) + ' '
# Processing for column 'Time' and 'Foul' (foul type)
match = re.search('\d\d:\d\d.\d', df.loc[idx, 'Soup'])
if match:
time = match.group()
df.loc[idx, 'Time'] = time
for foul in foul_types:
if foul in df.loc[idx, 'Soup']:
df.loc[idx, 'Foul'] = foul
review_done = (False if type(row['Review']) == float else True)
if not review_done: # if parser moved review decision to row below in table
soup_temp = ''
for cell in df.loc[idx+1,:].values:
soup_temp += str(cell) + ' '
for review_decision in review_types:
if review_decision in soup_temp:
df_errors.loc[idx, 'Review Decision'] = review_decision
review_done = True
if not review_done:
if 'Observable in enhanced video' in str(df.loc[idx, 'Comment']):
df_errors.loc[idx, 'Review Decision'] = 'PINC' # for Partial Incorrect No Call
# except Exception as err : print(err)
df2.info()
format_df(df2)
df2.info()
# Still many errors by tabula java function
df2['Foul'].value_counts()
df2['Review'].value_counts()
df2.to_csv('L2M_processed.csv', index=False, compression='gzip')
###Output
_____no_output_____ |
tutorials/Boos-Stefanski-Ch7.ipynb | ###Markdown
Chapter 7: M-Estimation (Estimating Equations) From Boos DD & Stefanski LA (2013). M-estimation (estimating equations). In Essential Statistical Inference (pp. 297-337). Springer, New York, NY. Examples of M-Estimation provided in that chapter are replicated here using `delicatessen`. Reading the chapter and looking at the corresponding implementations is likely to be the best approach to learning both the theory and application of M-Estimation.
###Code
# Initial setup
import numpy as np
import scipy as sp
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import delicatessen
from delicatessen import MEstimator
np.random.seed(80950841)
print("NumPy version: ", np.__version__)
print("SciPy version: ", sp.__version__)
print("Pandas version: ", pd.__version__)
print("Delicatessen version:", delicatessen.__version__)
# Generating a generic data set for following examples
n = 200
data = pd.DataFrame()
data['Y'] = np.random.normal(loc=10, scale=2, size=n)
data['X'] = np.random.normal(loc=5, size=n)
data['C'] = 1
###Output
_____no_output_____
###Markdown
7.2.2 Sample Mean and Variance The first example is the estimating equations for the mean and variance. Here, estimating equations for both the mean and variance are stacked together: $$\psi(Y_i, \theta) = \begin{bmatrix} Y_i - \theta_1\\ (Y_i - \theta_1)^2 - \theta_2\end{bmatrix} $$ The top estimating equation is the mean, and the bottom estimating equation is the (asymptotic) variance. Here, both the by-hand and built-in estimating equations are demonstrated
###Code
def psi_mean_var(theta):
"""By-hand stacked estimating equations"""
return (data['Y'] - theta[0],
(data['Y'] - theta[0])**2 - theta[1])
estr = MEstimator(psi_mean_var, init=[0, 0])
estr.estimate()
print("=========================================================")
print("Mean & Variance")
print("=========================================================")
print("M-Estimation: by-hand")
print("Theta:", estr.theta)
print("Var: \n", estr.asymptotic_variance)
print("---------------------------------------------------------")
print("Closed-Form")
print("Mean: ", np.mean(data['Y']))
print("Var: ", np.var(data['Y'], ddof=0))
print("=========================================================")
###Output
=========================================================
Mean & Variance
=========================================================
M-Estimation: by-hand
Theta: [10.16284625 4.11208477]
Var:
[[ 4.11208409 -1.6740001 ]
[-1.6740001 36.16386652]]
---------------------------------------------------------
Closed-Form
Mean: 10.162846250198633
Var: 4.112084770881207
=========================================================
###Markdown
Notice that $\theta_2$ also matches the first element of the (asymptotic) variance matrix. These two values should match (since they are estimating the same thing). Further, as shown, the closed-form solutions for the mean and variance are equal to those from the M-Estimation approach. The following uses the built-in estimating equation to estimate the mean and variance
###Code
from delicatessen.estimating_equations import ee_mean_variance
def psi_mean_var_default(theta):
"""Built-in stacked estimating equations"""
return ee_mean_variance(y=np.asarray(data['Y']), theta=theta)
estr = MEstimator(psi_mean_var_default, init=[0, 0])
estr.estimate()
print("=========================================================")
print("Mean & Variance")
print("=========================================================")
print("M-Estimation: built-in")
print("Theta:", estr.theta)
print("Var: \n", estr.asymptotic_variance)
print("=========================================================")
###Output
=========================================================
Mean & Variance
=========================================================
M-Estimation: built-in
Theta: [10.16284625 4.11208477]
Var:
[[ 4.11208409 -1.6740001 ]
[-1.6740001 36.16386652]]
=========================================================
###Markdown
7.2.3 Ratio Estimator The next example is a ratio estimator, which can be written as either a single estimating equation or as three stacked estimating equations. First is the single estimating equation version: $$\psi(Y_i, \theta) = \begin{bmatrix} Y_i - X_i \times \theta_1\end{bmatrix} $$
###Code
def psi_ratio(theta):
return data['Y'] - data['X']*theta
estr = MEstimator(psi_ratio, init=[0, ])
estr.estimate()
print("=========================================================")
print("Ratio Estimator")
print("=========================================================")
print("M-Estimation: single estimating equation")
print("Theta:", estr.theta)
print("Var: ",estr.asymptotic_variance)
print("---------------------------------------------------------")
print("Closed-Form")
theta = np.mean(data['Y']) / np.mean(data['X'])
b = 1 / np.mean(data['X'])**2
c = np.mean((data['Y'] - theta*data['X'])**2)
var = b * c
print("Ratio:",theta)
print("Var: ",var)
print("=========================================================")
###Output
=========================================================
Ratio Estimator
=========================================================
M-Estimation: single estimating equation
Theta: [2.08234516]
Var: [[0.33842324]]
---------------------------------------------------------
Closed-Form
Ratio: 2.0823451609959682
Var: 0.33842329733168625
=========================================================
###Markdown
The next example is the ratio estimator written as three stacked estimating equations. $$\psi(Y_i, \theta) = \begin{bmatrix} Y_i - \theta_1\\ X_i - \theta_2\\ \theta_1 - \theta_2 \theta_3\end{bmatrix} $$ Note that $\theta_3$ in the last element is the ratio. To keep the dimensions correct, the last estimating equation needs to be multiplied by an array of $n$ constants. This is done via the `np.ones` trick
###Code
def psi_ratio_three(theta):
return (data['Y'] - theta[0],
data['X'] - theta[1],
np.ones(data.shape[0])*theta[0] - theta[1]*theta[2])
estr = MEstimator(psi_ratio_three, init=[0.1, 0.1, 0.1])
estr.estimate()
print("=========================================================")
print("Ratio Estimator")
print("=========================================================")
print("M-Estimation: three estimating equations")
print("Theta:", estr.theta)
print("Var: \n", estr.asymptotic_variance)
print("=========================================================")
###Output
=========================================================
Ratio Estimator
=========================================================
M-Estimation: three estimating equations
Theta: [10.16284625 4.88048112 2.08234516]
Var:
[[ 4.11208409 0.04326813 0.82409594]
[ 0.04326813 0.95223623 -0.39742309]
[ 0.82409594 -0.39742309 0.33842314]]
=========================================================
###Markdown
7.2.4 Delta Method via M-Estimation M-estimation also allows for a generalization of the delta method. Below is an example with two transformations of the variance, $\sqrt{\theta_2}$ and $\log(\theta_2)$: $$\psi(Y_i, \theta) = \begin{bmatrix} Y_i - \theta_1\\ (Y_i - \theta_1)^2 - \theta_2\\ \sqrt{\theta_2} - \theta_3\\ \log(\theta_2) - \theta_4\end{bmatrix} $$
###Code
def psi_delta(theta):
return (data['Y'] - theta[0],
(data['Y'] - theta[0])**2 - theta[1],
np.ones(data.shape[0])*np.sqrt(theta[1]) - theta[2],
np.ones(data.shape[0])*np.log(theta[1]) - theta[3])
estr = MEstimator(psi_delta, init=[1., 1., 1., 1.])
estr.estimate()
print("=========================================================")
print("Delta Method")
print("=========================================================")
print("M-Estimation")
print("Theta:", estr.theta)
print("Var: \n", estr.variance)
print("=========================================================")
###Output
=========================================================
Delta Method
=========================================================
M-Estimation
Theta: [10.16284625 4.11208477 2.0278276 1.41393014]
Var:
[[ 0.02056042 -0.00837 -0.00206379 -0.00203546]
[-0.00837 0.18081933 0.04458452 0.04397267]
[-0.00206379 0.04458452 0.01099318 0.01084232]
[-0.00203546 0.04397267 0.01084232 0.01069352]]
=========================================================
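###Markdown
As a quick cross-check (not part of the chapter's code), the classical delta method gives $Var(\sqrt{\theta_2}) \approx (2\sqrt{\theta_2})^{-2} Var(\theta_2)$ and $Var(\log(\theta_2)) \approx \theta_2^{-2} Var(\theta_2)$. Comparing these approximations against the corresponding diagonal elements of the sandwich variance above should show close agreement.
###Code
# Cross-check of the M-estimation (sandwich) variance against the classical delta method
theta = estr.theta
var = estr.variance
var_sqrt_delta = (1 / (2 * np.sqrt(theta[1])))**2 * var[1, 1]  # delta-method Var for sqrt(theta_2)
var_log_delta = (1 / theta[1])**2 * var[1, 1]                  # delta-method Var for log(theta_2)
print("sqrt: sandwich =", var[2, 2], "delta method =", var_sqrt_delta)
print("log:  sandwich =", var[3, 3], "delta method =", var_log_delta)
###Output
_____no_output_____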
###Markdown
7.2.6 Instrumental Variable Estimation Two variations on the estimating equations for instrumental variable analyses. $X$ is the exposure of interest, $X^*$ is the mismeasured (observed) version of $X$, $T$ is the instrument for $X$, and $Y$ is the outcome of interest. We are interested in estimating $\beta_1$ of: $$Y_i = \beta_0 + \beta_1 X_i + e_i$$ Since $X^*$ is mismeasured, we can't immediately estimate $\beta_1$. Instead, we need to use an instrumental variable approach. Below is some generated data consistent with this measurement error story:
###Code
# Generating some data
n = 500
data = pd.DataFrame()
data['X'] = np.random.normal(size=n)
data['Y'] = 0.5 + 2*data['X'] + np.random.normal(loc=0, size=n)
data['X-star'] = data['X'] + np.random.normal(loc=0, size=n)
data['T'] = -0.75 - 1*data['X'] + np.random.normal(loc=0, size=n)
###Output
_____no_output_____
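###Markdown
Before fitting the instrumental variable estimators, a quick aside that is not in the chapter: regressing $Y$ on the mismeasured $X^*$ directly illustrates the attenuation bias that motivates the IV approach. With the data generated above (equal variance for $X$ and the measurement error), the naive slope should land near $\beta_1 / 2$ rather than the true $\beta_1 = 2$.
###Code
# Illustration only: naive least-squares slope of Y on the mismeasured X-star
slope, intercept = np.polyfit(data['X-star'], data['Y'], 1)
print("Naive slope using X-star:", slope)
###Output
_____no_output_____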
###Markdown
The estimating equations are $$\psi(Y_i,X_i^*,T_i, \theta) = \begin{bmatrix} T_i - \theta_1\\ (Y_i - \theta_2 X_i^*)(\theta_1 - T_i)\end{bmatrix} $$ where $\theta_1$ is the mean of the instrument and $\theta_2$ corresponds to $\beta_1$
###Code
def psi_instrument(theta):
return (theta[0] - data['T'],
(data['Y'] - data['X-star']*theta[1])*(theta[0] - data['T']))
estr = MEstimator(psi_instrument, init=[0.1, 0.1])
estr.estimate()
print("=========================================================")
print("Instrumental Variable")
print("=========================================================")
print("M-Estimation")
print("Theta:", estr.theta)
print("Var: \n", estr.variance)
print("=========================================================")
###Output
=========================================================
Instrumental Variable
=========================================================
M-Estimation
Theta: [-0.89989957 2.01777751]
Var:
[[ 0.00430115 -0.0006694 ]
[-0.0006694 0.023841 ]]
=========================================================
###Markdown
Another set of estimating equations for this instrumental variable approach is $$\psi(Y_i,X_i^*,T_i, \theta) = \begin{bmatrix} T_i - \theta_1\\ \theta_2 - X_i^* \\ (Y_i - \theta_3 X_i^*)(\theta_2 - X_i^*)\\ (Y_i - \theta_4 X_i^*)(\theta_1 - T_i)\end{bmatrix} $$ This set of estimating equations further allows for inference on the difference between $\beta_1$ and the coefficient for $Y$ given $X^*$. Here, $\theta_1$ is the mean of the instrument, $\theta_2$ is the mean of the mismeasured value of $X$, $\theta_3$ corresponds to the coefficient for $Y$ given $X^*$, and $\theta_4$ is $\beta_1$
###Code
def psi(theta):
return (theta[0] - data['T'],
theta[1] - data['X-star'],
(data['Y'] - data['X-star']*theta[2])*(theta[1] - data['X-star']),
(data['Y'] - data['X-star']*theta[3])*(theta[0] - data['T'])
)
estr = MEstimator(psi, init=[0.1, 0.1, 0.1, 0.1])
estr.estimate()
print("=========================================================")
print("Instrumental Variable")
print("=========================================================")
print("M-Estimation")
print("Theta:", estr.theta)
print("Var: \n", estr.variance)
print("=========================================================")
###Output
=========================================================
Instrumental Variable
=========================================================
M-Estimation
Theta: [-0.89989957 0.02117577 0.95717618 2.01777751]
Var:
[[ 0.00430115 -0.00207361 -0.00011136 -0.0006694 ]
[-0.00207361 0.0041239 0.00023703 0.00039778]
[-0.00011136 0.00023703 0.00302462 0.00171133]
[-0.0006694 0.00039778 0.00171133 0.023841 ]]
=========================================================
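###Markdown
Because $\theta_3$ and $\theta_4$ are estimated jointly, the contrast mentioned above (the IV coefficient minus the naive coefficient) and its variance follow directly from the joint covariance matrix. A minimal sketch, using the estimator object from the previous cell:
###Code
# Contrast between the IV coefficient (theta_4) and the naive coefficient (theta_3),
# with its variance taken from the joint covariance matrix
diff = estr.theta[3] - estr.theta[2]
var_diff = estr.variance[3, 3] + estr.variance[2, 2] - 2 * estr.variance[2, 3]
se_diff = np.sqrt(var_diff)
print("Difference:", diff)
print("95% CI:    ", diff - 1.96 * se_diff, diff + 1.96 * se_diff)
###Output
_____no_output_____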
###Markdown
7.4.1 Robust Location Estimation The robust location estimator reduces the influence of outliers by applying bounds. The robust mean with a simple bounding function is $$\psi(Y_i, \theta_1) = g_k(Y_i) - \theta_1$$ where $k$ indicates the bound, such that $g_k(Y_i) = k$ if $Y_i > k$, $g_k(Y_i) = -k$ if $Y_i < -k$, and $g_k(Y_i) = Y_i$ otherwise. Below is the estimating equation translated into code
###Code
# Generating some generic data
y = np.random.normal(size=250)
n = y.shape[0]
def psi_robust_mean(theta):
k = 3 # Bound value
yr = np.where(y > k, k, y) # Applying upper bound
yr = np.where(yr < -k, -k, yr) # Applying lower bound (to yr, so the upper bound is not discarded)
return yr - theta
estr = MEstimator(psi_robust_mean, init=[0.])
estr.estimate()
print("=========================================================")
print("Robust Location Estimation")
print("=========================================================")
print("M-Estimation")
print("Theta:", estr.theta)
print("Var: \n", estr.variance)
print("=========================================================")
###Output
=========================================================
Robust Location Estimation
=========================================================
M-Estimation
Theta: [0.03056108]
Var:
[[0.00370521]]
=========================================================
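###Markdown
To see why the bounding function is useful, here is a toy illustration (not from the chapter): append a single extreme observation and compare the ordinary mean with the root of the bounded estimating equation, which is simply the mean of the bounded values $g_k(Y_i)$.
###Code
# Toy illustration: influence of a single outlier on the plain mean vs. the bounded psi
y_outlier = np.append(y, 50.0)         # add one extreme observation
k = 3
y_bounded = np.clip(y_outlier, -k, k)  # g_k applied to each observation
print("Plain mean with outlier:      ", np.mean(y_outlier))
print("Bounded-psi root with outlier:", np.mean(y_bounded))
###Output
_____no_output_____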
###Markdown
7.4.2 Quantile Estimation The previous estimating equation was non-smooth. Here, we consider another non-smooth function: quantiles. The estimating equation for the $q$-th quantile is $$\psi_q(Y_i, \theta_1) = q - I(Y_i \le \theta_1)$$ Due to the non-smooth nature, this estimating equation can be a little difficult to optimize. Consider a case where the median is 0.55 but the next largest observation is 0.57. The summed estimating equation is then zero for all values in $[0.55, 0.57)$. To handle these issues in `delicatessen`, we will use a different root-finding algorithm and a higher tolerance value. Furthermore, we adjust the `dx` and `order` values for the numerical approximation of the derivative
###Code
def psi_quantile(theta):
return (0.25 - 1*(y <= theta[0]),
0.50 - 1*(y <= theta[1]),
0.75 - 1*(y <= theta[2]),)
estr = MEstimator(psi_quantile, init=[0., 0., 0.])
estr.estimate(solver='hybr', # Selecting the hybr method
tolerance=1e-3, # Increasing the tolerance
dx=0.1, # Increasing distance for numerical approx
order=25) # Increasing the number of points for numerical approx
print("=========================================================")
print("Quantiles")
print("=========================================================")
print("M-Estimation")
print("Theta:", estr.theta)
print("Var: \n", estr.variance)
print("---------------------------------------------------------")
print("Closed-Form")
print(np.quantile(y, q=[0.25, 0.50, 0.75]))
print("=========================================================")
###Output
=========================================================
Quantiles
=========================================================
M-Estimation
Theta: [-0.71064081 0.10309617 0.67848129]
Var:
[[0.02106748 0.00668387 0.00409554]
[0.00668387 0.00687221 0.00420896]
[0.00409554 0.00420896 0.00765179]]
---------------------------------------------------------
Closed-Form
[-0.65124657 0.10838232 0.68413976]
=========================================================
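###Markdown
The non-smoothness described above can be seen directly with a quick illustration (not from the chapter): the summed median estimating equation is a step function of $\theta$, so it stays constant over whole intervals between adjacent order statistics, which is why the root-finder settings had to be loosened.
###Code
# Illustration only: the summed estimating equation for the median is piecewise constant in theta
med = np.median(y)
for t in (med - 0.02, med - 0.01, med, med + 0.01, med + 0.02):
    print(round(t, 3), np.sum(0.5 - 1*(y <= t)))
###Output
_____no_output_____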
###Markdown
Notice that there is a difference between the M-Estimation results and the closed-form quantiles. This occurs for the reason described above. Furthermore, the variances are sensitive to changes in `dx`, so they should not be considered reliable. 7.4.3 Positive Mean Deviation A related parameter is the positive mean deviation, which measures the average deviation of observations above the median. The estimating equations are $$\psi(Y_i; \theta) = \begin{bmatrix} 2(Y_i - \theta_2)I(Y_i > \theta_2) - \theta_1\\ 0.5 - I(Y_i \le \theta_2)\end{bmatrix}$$ As before, the median is non-smooth. Therefore, we need to revise the parameters and our optimization approach.
###Code
def psi_deviation(theta):
return ((2 * (y - theta[1]) * (y > theta[1])) - theta[0],
1/2 - (y <= theta[1]), )
estr = MEstimator(psi_deviation, init=[0.0, 0.0, ])
estr.estimate(solver='hybr', # Selecting the hybr method
tolerance=1e-3, # Increasing the tolerance
dx=0.1, # Increasing distance for numerical approx
order=25) # Increasing the number of points for numerical approx
print("=========================================================")
print("Positive Mean Deviation")
print("=========================================================")
print("M-Estimation")
print(estr.theta)
print(estr.variance)
print("---------------------------------------------------------")
print("Closed-Form")
median = np.median(y)
v = 2/n * np.sum((y - median) * (y > median))
print(v, median)
print("=========================================================")
###Output
=========================================================
Positive Mean Deviation
=========================================================
M-Estimation
[0.69144095 0.10605091]
[[ 0.00381667 -0.002829 ]
[-0.002829 0.00625453]]
---------------------------------------------------------
Closed-Form
0.6891092946789333 0.10838231523145117
=========================================================
###Markdown
7.5.1 Linear Model with Random $X$ Next, we can run a linear regression model. Note that the variance here is robust (to violations of the homoscedasticity assumption). Note that we need to manually add an intercept (the column `C` in the data). As a comparison, we provide the equivalent using the `statsmodels` generalized linear model with heteroscedasticity-corrected variances.
###Code
n = 500
data = pd.DataFrame()
data['X'] = np.random.normal(size=n)
data['Z'] = np.random.normal(size=n)
data['Y'] = 0.5 + 2*data['X'] - 1*data['Z'] + np.random.normal(size=n)
data['C'] = 1
def psi_regression(theta):
x = np.asarray(data[['C', 'X', 'Z']])
y = np.asarray(data['Y'])[:, None]
beta = np.asarray(theta)[:, None]
return ((y - np.dot(x, beta)) * x).T
mestimator = MEstimator(psi_regression, init=[0.1, 0.1, 0.1])
mestimator.estimate()
print("=========================================================")
print("Linear Model")
print("=========================================================")
print("M-Estimation: by-hand")
print(mestimator.theta)
print(mestimator.variance)
print("---------------------------------------------------------")
print("GLM Estimator")
glm = smf.glm("Y ~ X + Z", data).fit(cov_type="HC1")
print(np.asarray(glm.params))
print(np.asarray(glm.cov_params()))
print("=========================================================")
###Output
=========================================================
Linear Model
=========================================================
M-Estimation: by-hand
[ 0.41082601 1.96289222 -1.02663555]
[[ 2.18524079e-03 7.28169639e-05 1.54216620e-04]
[ 7.28169639e-05 2.08315655e-03 -4.09519996e-05]
[ 1.54216620e-04 -4.09519996e-05 2.14573736e-03]]
---------------------------------------------------------
GLM Estimator
[ 0.41082601 1.96289222 -1.02663555]
[[ 2.18524092e-03 7.28169947e-05 1.54216630e-04]
[ 7.28169947e-05 2.08315690e-03 -4.09519947e-05]
[ 1.54216630e-04 -4.09519947e-05 2.14573770e-03]]
=========================================================
###Markdown
The following uses the built-in linear regression functionality
###Code
from delicatessen.estimating_equations import ee_linear_regression
def psi_regression(theta):
return ee_linear_regression(theta=theta,
X=data[['C', 'X', 'Z']],
y=data['Y'])
mestimator = MEstimator(psi_regression, init=[0.1, 0.1, 0.1])
mestimator.estimate()
print("=========================================================")
print("Linear Model")
print("=========================================================")
print("M-Estimation: built-in")
print(mestimator.theta)
print(mestimator.variance)
print("=========================================================")
###Output
=========================================================
Linear Model
=========================================================
M-Estimation: built-in
[ 0.41082601 1.96289222 -1.02663555]
[[ 2.18524079e-03 7.28169639e-05 1.54216620e-04]
[ 7.28169639e-05 2.08315655e-03 -4.09519996e-05]
[ 1.54216620e-04 -4.09519996e-05 2.14573736e-03]]
=========================================================
###Markdown
7.5.4 Robust Regression
###Code
def psi_robust_regression(theta):
k = 1.345
x = np.asarray(data[['C', 'X', 'Z']])
y = np.asarray(data['Y'])[:, None]
beta = np.asarray(theta)[:, None]
preds = np.clip(y - np.dot(x, beta), -k, k)
return (preds * x).T
mestimator = MEstimator(psi_robust_regression, init=[0.5, 2., -1.])
mestimator.estimate()
print("=========================================================")
print("Linear Model")
print("=========================================================")
print("M-Estimation: by-hand")
print(mestimator.theta)
print(mestimator.variance)
print("=========================================================")
###Output
=========================================================
Linear Model
=========================================================
M-Estimation: by-hand
[ 0.41223641 1.95577495 -1.02508413]
[[ 2.31591834e-03 1.82106059e-04 2.57209790e-04]
[ 1.82106059e-04 2.12098791e-03 -6.95782387e-05]
[ 2.57209790e-04 -6.95782387e-05 2.38212555e-03]]
=========================================================
###Markdown
The following uses the built-in robust linear regression functionality
###Code
from delicatessen.estimating_equations import ee_robust_linear_regression
def psi_robust_regression(theta):
return ee_robust_linear_regression(theta=theta,
X=data[['C', 'X', 'Z']],
y=data['Y'],
k=1.345)
mestimator = MEstimator(psi_robust_regression, init=[0.5, 2., -1.])
mestimator.estimate()
print("=========================================================")
print("Linear Model")
print("=========================================================")
print("M-Estimation: built-in")
print(mestimator.theta)
print(mestimator.variance)
print("=========================================================")
###Output
=========================================================
Linear Model
=========================================================
M-Estimation: built-in
[ 0.41223641 1.95577495 -1.02508413]
[[ 2.31591834e-03 1.82106059e-04 2.57209790e-04]
[ 1.82106059e-04 2.12098791e-03 -6.95782387e-05]
[ 2.57209790e-04 -6.95782387e-05 2.38212555e-03]]
=========================================================
|
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Updating a Model) - Solution.ipynb | ###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_ --- In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model. This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. Instructions Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell. > **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can typically be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the data The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise. > Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011. We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-05-26 19:21:30-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 24.8MB/s in 3.6s
2020-05-26 19:21:34 (22.6 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return the unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
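As a tiny, self-contained illustration of this idea (separate from the actual pipeline below, and using raw strings rather than the pre-tokenized reviews), a vectorizer fitted on "training" text keeps only the vocabulary it saw there and silently drops unseen words when transforming new text:

```python
# Toy illustration only -- these sentences are made up and are not part of
# the IMDb data or the pipeline below.
from sklearn.feature_extraction.text import CountVectorizer

toy = CountVectorizer(max_features=5)
toy.fit_transform(["great movie great acting", "terrible plot"])   # "training" text
print(toy.vocabulary_)                                             # word -> column index
print(toy.transform(["great plot terrible acting unknownword"]).toarray())
# 'unknownword' was never seen during fitting, so it is simply ignored.
```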
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality, in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
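As a brief aside, here is a minimal sketch of what the lower-level route mentioned above might look like, using `boto3` directly instead of `upload_data()`; the bucket name and key below are hypothetical placeholders rather than the ones this notebook actually uses, and `data_dir` is the local directory defined earlier.

```python
# Sketch only: uploading one local file with boto3 instead of upload_data().
# The bucket name and key below are hypothetical placeholders.
import os
import boto3

s3 = boto3.client('s3')
s3.upload_file(os.path.join(data_dir, 'train.csv'),   # local file to upload
               'my-example-bucket',                    # destination S3 bucket
               'sentiment-update/train.csv')           # destination key (path)
```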
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
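Purely as an illustrative sketch (not something this notebook runs), the same three pieces could also be tied together by hand with the SDK's `Model` class once a training job has produced artifacts on S3; the artifact path below is a placeholder, and `container`, `role` and `session` refer to the objects defined in the next few cells.

```python
# Hypothetical sketch: pair model artifacts (an S3 tarball from a completed
# training job) with the same container image to build a deployable model.
# The S3 path is a placeholder.
from sagemaker.model import Model

manual_model = Model(model_data='s3://<bucket>/<prefix>/output/<job-name>/output/model.tar.gz',
                     image=container,            # inference code (container)
                     role=role,                  # IAM role used to access the artifacts
                     sagemaker_session=session)
# manual_model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')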
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2020-05-26 19:27:18 Starting - Starting the training job...
2020-05-26 19:27:20 Starting - Launching requested ML instances......
2020-05-26 19:28:23 Starting - Preparing the instances for training...
2020-05-26 19:29:13 Downloading - Downloading input data...
2020-05-26 19:29:45 Training - Downloading the training image...
2020-05-26 19:30:05 Training - Training image download completed. Training in progress.[34mArguments: train[0m
[34m[2020-05-26:19:30:05:INFO] Running standalone xgboost training.[0m
[34m[2020-05-26:19:30:05:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8468.55mb[0m
[34m[2020-05-26:19:30:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[19:30:05] S3DistributionType set as FullyReplicated[0m
[34m[19:30:07] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-05-26:19:30:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[19:30:07] S3DistributionType set as FullyReplicated[0m
[34m[19:30:08] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[19:30:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.2958#011validation-error:0.3024[0m
[34mMultiple eval metrics have been passed: 'validation-error' will be used for early stopping.
[0m
[34mWill train until validation-error hasn't improved in 10 rounds.[0m
[34m[19:30:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.280533#011validation-error:0.2869[0m
[34m[19:30:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.2682#011validation-error:0.2755[0m
[34m[19:30:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.265267#011validation-error:0.2745[0m
[34m[19:30:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.2544#011validation-error:0.2655[0m
[34m[19:30:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.250933#011validation-error:0.2622[0m
[34m[19:30:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.246733#011validation-error:0.2568[0m
[34m[19:30:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.242533#011validation-error:0.2529[0m
[34m[19:30:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.235133#011validation-error:0.2455[0m
[34m[19:30:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.2306#011validation-error:0.2416[0m
[34m[19:30:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.2244#011validation-error:0.2382[0m
[34m[19:30:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.216#011validation-error:0.2303[0m
[34m[19:30:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.211333#011validation-error:0.2272[0m
[34m[19:30:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.2078#011validation-error:0.2248[0m
[34m[19:30:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.204#011validation-error:0.22[0m
[34m[19:30:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.200533#011validation-error:0.2195[0m
[34m[19:30:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.196133#011validation-error:0.2175[0m
[34m[19:30:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.194333#011validation-error:0.2148[0m
[34m[19:30:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.1922#011validation-error:0.2114[0m
[34m[19:30:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.188533#011validation-error:0.2089[0m
[34m[19:30:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.186267#011validation-error:0.205[0m
[34m[19:30:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.181867#011validation-error:0.2055[0m
[34m[19:30:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.179333#011validation-error:0.203[0m
[34m[19:30:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.1768#011validation-error:0.2[0m
[34m[19:30:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.1746#011validation-error:0.1981[0m
[34m[19:30:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.1716#011validation-error:0.1947[0m
[34m[19:30:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.170933#011validation-error:0.195[0m
[34m[19:30:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.168133#011validation-error:0.1916[0m
[34m[19:30:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.164933#011validation-error:0.1922[0m
[34m[19:30:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.162667#011validation-error:0.1918[0m
[34m[19:30:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.162933#011validation-error:0.1915[0m
[34m[19:30:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.1606#011validation-error:0.189[0m
[34m[19:30:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.1602#011validation-error:0.187[0m
[34m[19:30:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.1572#011validation-error:0.1874[0m
[34m[19:30:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.155867#011validation-error:0.1875[0m
[34m[19:30:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.154867#011validation-error:0.1853[0m
[34m[19:30:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.153867#011validation-error:0.1842[0m
[34m[19:30:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.1524#011validation-error:0.1824[0m
[34m[19:31:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.151267#011validation-error:0.1794[0m
[34m[19:31:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.149533#011validation-error:0.1785[0m
[34m[19:31:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[40]#011train-error:0.147733#011validation-error:0.1772[0m
[34m[19:31:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[41]#011train-error:0.1462#011validation-error:0.1769[0m
[34m[19:31:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[42]#011train-error:0.144733#011validation-error:0.1761[0m
[34m[19:31:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[43]#011train-error:0.144867#011validation-error:0.1758[0m
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
....................[34mArguments: serve[0m
[34m[2020-05-26 19:38:33 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-05-26 19:38:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-05-26 19:38:33 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-05-26 19:38:33 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-05-26 19:38:33 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-05-26:19:38:33:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-05-26 19:38:33 +0000] [41] [INFO] Booting worker with pid: 41[0m
[34m[2020-05-26 19:38:33 +0000] [42] [INFO] Booting worker with pid: 42[0m
[34m[2020-05-26:19:38:33:INFO] Model loaded successfully for worker : 40[0m
[34m[2020-05-26:19:38:33:INFO] Model loaded successfully for worker : 41[0m
[34m[2020-05-26:19:38:33:INFO] Model loaded successfully for worker : 42[0m
[34m[2020-05-26:19:38:59:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:38:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:38:59:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:38:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:00:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:00:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:00:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:00:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:38:59:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:38:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:38:59:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:38:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:00:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:00:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:00:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:00:INFO] Determined delimiter of CSV input is ','[0m
[32m2020-05-26T19:38:56.796:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2020-05-26:19:39:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:04:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:04:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:04:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:04:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:04:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:04:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:05:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:05:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:05:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:09:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:09:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:09:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:09:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:09:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:09:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:09:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:09:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:09:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:09:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:09:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:09:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:10:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:10:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:10:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:10:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:11:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:11:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:11:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:12:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:12:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:12:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:12:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:12:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:12:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:12:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:12:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:12:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:12:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:12:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:12:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:14:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:14:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:14:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:14:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:16:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:16:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:16:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:16:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:17:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:17:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:17:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:17:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:39:17:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:39:17:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:17:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:17:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:17:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:17:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:39:17:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:39:17:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/370.6 KiB (3.8 MiB/s) with 1 file(s) remaining
Completed 370.6 KiB/370.6 KiB (5.3 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-283474268096/xgboost-2020-05-26-19-35-33-307/test.csv.out to ../data/sentiment_update/test.csv.out
###Markdown
The last step is now to read in the output from our model and convert it to something a little more usable; in this case we want the sentiment to be either `1` (positive) or `0` (negative). We then compare these predictions to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
....................[34mArguments: serve[0m
[34m[2020-05-26 19:45:24 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-05-26 19:45:24 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-05-26 19:45:24 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-05-26 19:45:24 +0000] [38] [INFO] Booting worker with pid: 38[0m
[34m[2020-05-26 19:45:24 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-05-26 19:45:24 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-05-26 19:45:24 +0000] [41] [INFO] Booting worker with pid: 41[0m
[34m[2020-05-26:19:45:24:INFO] Model loaded successfully for worker : 38[0m
[34m[2020-05-26:19:45:24:INFO] Model loaded successfully for worker : 40[0m
[34m[2020-05-26:19:45:24:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-05-26:19:45:24:INFO] Model loaded successfully for worker : 41[0m
[32m2020-05-26T19:45:46.978:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2020-05-26:19:45:49:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:49:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:50:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:50:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:49:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:49:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:50:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:50:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:50:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:50:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:50:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:50:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:50:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:52:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:52:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:52:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:52:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:52:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:52:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:53:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:53:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:53:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:53:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:54:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:55:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:55:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:55:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:54:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:55:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:55:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:55:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:55:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:55:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:55:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:55:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:55:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:55:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:55:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:55:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:57:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:57:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:57:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:45:58:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:45:58:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:57:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:57:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:57:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:45:58:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:45:58:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:04:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:04:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:05:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:05:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:04:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:04:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:05:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:05:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:05:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:05:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-26:19:46:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-26:19:46:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-26:19:46:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-26:19:46:07:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: the following arguments are required: paths
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
WARNING:sagemaker:Using already existing model: xgboost-2020-05-26-19-27-17-975
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
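# Yield (review, true_label) pairs for each new review that the deployed model
# gets wrong: in_X holds the raw word lists, in_XV the bag-of-words encodings
# sent to the endpoint, and in_Y the ground-truth labels.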
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
(['basic', 'watch', 'someth', 'make', 'sens', 'spoil', 'film', 'peopl', 'actual', 'want', 'take', 'look', 'flick', 'explain', 'stori', 'normal', 'everyday', 'day', 'women', 'walk', 'street', 'find', 'drive', 'car', 'follow', 'mani', 'event', 'take', 'place', 'time', 'includ', 'famili', 'specif', 'made', 'account', 'comment', 'film', 'horribl', 'written', 'act', 'great', 'event', 'great', 'stori', 'brought', 'nowher', 'could', 'ad', 'tremend', 'made', 'worldwid', 'epidem', 'sure', 'writer', 'tri', 'accomplish', 'make', 'usual', 'end', 'film', 'question', 'get', 'answer', 'film', 'ask', 'happen', '1', 'hour', '20', 'minut', 'pass', 'noth', 'spoiler', 'start', 'area', '2', 'dimens', 'behind', 'glass', 'would', 'come', 'world', 'kill', 'us', 'elabor', 'film', 'never', 'know', 'happen', 'happen', 'noth', 'get', 'explain', 'film', 'main', 'charact', 'even', 'main', 'charact', 'end', 'film', 'guy', 'final', 'figur', 'run', 'away', 'sister', 'boyfriend', 'main', 'charact', 'sadli', 'movi', 'end', '20', 'second', 'bought', 'movi', '10', 'threw', 'right', 'wast', 'time', 'realli', 'hope', 'noth', 'like', 'made', 'banana'], 1)
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set: the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'spill', 'reincarn', 'ghetto', '21st', 'playboy', 'victorian', 'weari'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'optimist', 'masterson', 'dubiou', 'sophi', 'banana', 'orchestr', 'omin'}
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency, but also what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
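Before building the new model, one optional way to start poking at the question above (a sketch, not part of the provided solution) is to simply count how often the words that are unique to the new vocabulary actually occur in the new reviews; a word with a suspiciously large count is a good hint about what has changed.

```python
# Optional sketch, not part of the provided solution: raw occurrence counts of
# the words that appear in the new vocabulary but not in the original one.
# Run this before new_X is set to None later in the notebook.
from collections import Counter

word_counts = Counter(word for review in new_X for word in review)
for word in sorted(new_vocabulary - original_vocabulary):
    print(word, word_counts[word])
```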
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
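# An endpoint configuration pairs a specific model (here, the one created behind the
# scenes by the batch transform job) with the instance type, initial instance count
# and traffic weight that SageMaker should use when serving it.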
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return a unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
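# A quick, hypothetical example of the same pipeline on a made-up review; the exact result
# depends on the stemmer, but it should come back roughly as ['movi', 'great', 'love', 'act']
# once the HTML tag, punctuation and stopwords are removed and the remaining words are stemmed.
review_to_words("<br />This movie was GREAT! Loved the acting.")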
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
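As a tiny, hypothetical illustration of what a bag-of-words encoding looks like (the toy documents below are made up and are not part of the real data), consider two "reviews" that have already been tokenized into word lists, which is the same format our preprocessed data uses.
###Code
from sklearn.feature_extraction.text import CountVectorizer

# Two made-up, already-tokenized documents.
toy_docs = [['great', 'movi', 'great', 'act'], ['bad', 'movi']]
# The identity preprocessor and tokenizer tell CountVectorizer that the input is already tokenized.
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vectorizer.fit_transform(toy_docs).toarray())  # one row of word counts per document
print(toy_vectorizer.vocabulary_)                        # word -> column index mapping
###Output
_____no_output_____
###Markdown
The helper below does the same thing for the full training and test sets, restricted to the `5000` most frequently occurring words, and caches the result so it only has to be computed once.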
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
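# The s3_input objects wrap the S3 location of each dataset together with its content type,
# telling SageMaker where the csv formatted training and validation data live.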
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
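# The same sanity check, written as a hypothetical programmatic assertion:
assert len(new_XV[100]) == len(vocabulary), "encoded review length should equal the vocabulary size"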
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
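For anyone who hasn't seen generators before, here is a tiny, self-contained illustration (a hypothetical toy example, unrelated to the review data) of how `yield` and `next` work together.
###Code
# A toy generator: values are produced lazily, one per call to next().
def first_multiples_of(n):
    for i in range(1, 4):
        yield n * i

toy_gen = first_multiples_of(5)
print(next(toy_gen), next(toy_gen), next(toy_gen))  # prints: 5 10 15
###Output
_____no_output_____
###Markdown
The generator below uses the same mechanics to walk lazily through the new reviews and hand back only the ones that the deployed model classifies incorrectly.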
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set: the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Not only which words (if any) appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
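Deleting the endpoint is what stops the hourly charges. The endpoint configurations and model objects we created along the way don't cost anything to keep, but if we want to tidy up SageMaker completely, a hedged sketch of the extra clean-up (using the names created earlier in this notebook) looks like this:

```python
# Optional housekeeping (sketch): remove the endpoint configuration and model
# objects as well. These don't incur charges, so this is purely about tidiness.
session.sagemaker_client.delete_endpoint_config(
    EndpointConfigName=new_xgb_endpoint_config_name)
session.sagemaker_client.delete_model(ModelName=new_xgb_transformer.model_name)
```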
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples. Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
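To make that last point concrete, here is a tiny stand-alone illustration (toy sentences, not part of our pipeline) of fitting a vocabulary on training documents only and then re-using it, unchanged, on unseen documents:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy illustration: the vocabulary is learned from the "training" documents...
train_docs = ['the movie was great great fun', 'a dull and boring movie']
test_docs = ['great fun but a boring ending']

vec = CountVectorizer()
print(vec.fit_transform(train_docs).toarray())  # counts over the learned vocabulary
print(vec.get_feature_names())
# ...and the "test" document is only transformed with that same vocabulary;
# words never seen during training (such as 'ending') are simply dropped.
print(vec.transform(test_docs).toarray())
```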
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features to a dense array using .toarray()
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
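In other words, every row of `train.csv` and `validation.csv` needs to look like `label,feature_1,feature_2,...` with no header row and no index column. A minimal sketch with toy numbers (our real rows have 5000 feature columns) of what that layout produces:

```python
import pandas as pd

# Toy sketch of the row layout SageMaker's XGBoost expects: label first,
# then the features, with no header and no index.
toy_labels = pd.DataFrame([1, 0])
toy_features = pd.DataFrame([[3, 0, 1], [0, 2, 0]])
print(pd.concat([toy_labels, toy_features], axis=1).to_csv(header=False, index=False))
# 1,3,0,1
# 0,0,2,0
```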
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
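For comparison, a rough sketch of what the low-level equivalent of `upload_data()` might look like with `boto3` directly (using the `session` and `prefix` variables defined in the next cell); the high-level call is doing essentially this on our behalf:

```python
import boto3

# Low-level alternative (sketch only): push one file into the session's default
# bucket under the chosen prefix, which is what upload_data() does for us.
s3 = boto3.client('s3')
s3.upload_file(Filename=os.path.join(data_dir, 'train.csv'),
               Bucket=session.default_bucket(),
               Key='{}/{}'.format(prefix, 'train.csv'))
```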
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
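To make those three pieces a little more tangible: the container URI retrieved in the next cell stands in for both the training and inference code, and the model artifacts end up as a `model.tar.gz` in S3 once training completes. A small sketch (meant to be run after the `fit()` call later in this notebook):

```python
# Sketch: where the pieces live once training has run.
print(container)       # Docker image URI holding the training and inference code
print(xgb.model_data)  # S3 path to the trained model artifacts (available after fit())
```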
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
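One way to make this quality control a little more systematic, sketched below under the assumption that we can get predictions for each hand-labelled batch of production reviews, is to track accuracy per batch and flag the model when it drops below some (arbitrary) threshold:

```python
from sklearn.metrics import accuracy_score

def model_needs_attention(batch_labels, batch_predictions, threshold=0.8):
    """Toy drift check (sketch): flag the deployed model when accuracy on a
    hand-labelled batch of production reviews falls below the threshold."""
    acc = accuracy_score(batch_labels, batch_predictions)
    print('accuracy on labelled batch: {:.3f}'.format(acc))
    return acc < threshold
```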
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
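For reference, a two-line toy example of the pattern being used here: a generator function pauses at each `yield`, and every call to `next()` resumes it just long enough to produce the next value.

```python
# Toy generator: produces one square at a time instead of building a whole list.
def squares(n):
    for i in range(n):
        yield i * i

g = squares(5)
print(next(g))  # 0
print(next(g))  # 1
```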
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much, however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data (one possible starting point is sketched at the end of this cell). (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
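Returning to the question above: one way to start investigating (a sketch, not a definitive analysis) is to count how often the words that are unique to the new vocabulary actually occur in the new reviews; a word from that set showing up with a very high count would be a strong hint about what has changed. This assumes `new_X` still holds the tokenised new reviews at this point.

```python
from collections import Counter

# Sketch: how often do the "new vocabulary only" words occur in the new reviews?
new_word_counts = Counter(word for review in new_X for word in review)
for word in sorted(new_vocabulary - original_vocabulary,
                   key=lambda w: -new_word_counts[w]):
    print(word, new_word_counts[word])
```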
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
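As for the leakage question above: one common way to address it, sketched here rather than done in this notebook, is to carve a held-out test set off the newly collected data *before* any re-training, and only ever report performance on that held-out portion. The split below would have to happen back at the point where `new_XV` and `new_Y` were constructed, before those variables were cleared to save memory.

```python
from sklearn.model_selection import train_test_split

# Sketch only: reserve 20% of the new data as a test set that the re-trained
# model never sees during training, so accuracy on it is a fair estimate.
new_rest_X, new_test_X, new_rest_Y, new_test_Y = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=0)
```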
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples. Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features to a dense array using .toarray()
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The SageMaker documentation for the XGBoost algorithm requires that the saved datasets contain no headers or index, and that for the training and validation data the label occurs first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
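For comparison, the same upload could also be done with the low-level `boto3` client. The snippet below is only a sketch (it assumes the `session` and `prefix` objects created in the next cell and the `data_dir` defined above); the high-level `upload_data()` call is what we actually run.

```python
import boto3

s3 = boto3.client('s3')
bucket = session.default_bucket()  # the same default bucket the high-level API uses

# Upload train.csv to s3://<bucket>/<prefix>/train.csv, mirroring what upload_data() does for us.
s3.upload_file(Filename=os.path.join(data_dir, 'train.csv'),
               Bucket=bucket,
               Key='{}/{}'.format(prefix, 'train.csv'))
```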
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
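To make the relationship between these three objects concrete, here is a hedged sketch of the low-level call that binds them together. It assumes the `role` and `container` objects created in the next cell, and `model_artifacts_uri` is a placeholder since artifacts only exist after a training job has completed; the high-level `Estimator` used below takes care of all of this for us.

```python
# Placeholder: in practice this S3 URI is produced by a completed training job.
model_artifacts_uri = 's3://<bucket>/<prefix>/output/<training-job-name>/output/model.tar.gz'

# A SageMaker 'model' is simply a name that ties the container (inference code)
# to the model artifacts, executed under our IAM role.
session.sagemaker_client.create_model(
    ModelName='sentiment-xgboost-example',   # hypothetical name
    ExecutionRoleArn=role,
    PrimaryContainer={
        'Image': container,                  # the XGBoost container retrieved below
        'ModelDataUrl': model_artifacts_uri  # where the trained model artifacts live
    })
```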
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is to read in the output from our model, convert the output to something a little more usable (here, a sentiment of either `1` for positive or `0` for negative), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
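For readers who haven't used generators before, here is a minimal sketch of the idea: a function containing `yield` hands back one value per call to `next`, instead of building the whole sequence up front.

```python
def count_up_to(n):
    """Yield 0, 1, ..., n-1 one value at a time instead of building a list."""
    i = 0
    while i < n:
        yield i  # execution pauses here until the next value is requested
        i += 1

g = count_up_to(3)
print(next(g))  # 0
print(next(g))  # 1
```

The `get_sample` function below works the same way, except that it only yields reviews which the deployed model classifies incorrectly.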
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much, however if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data (one possible starting point is sketched below). (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
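One possible starting point for the open-ended question above (a sketch, not a prescribed solution): sum, over all of the new reviews, the counts of the words that are in the new vocabulary but not the original one, and look at the most frequent offenders.

```python
# Total count of each vocabulary word across all of the new reviews.
new_counts = new_vectorizer.transform(new_X).toarray().sum(axis=0)

# Restrict attention to the words that only appear in the *new* vocabulary.
freq = {word: int(new_counts[idx])
        for word, idx in new_vectorizer.vocabulary_.items()
        if word not in original_vocabulary}

# Words with unexpectedly large counts are the interesting ones.
print(sorted(freq.items(), key=lambda kv: kv[1], reverse=True)[:20])
```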
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
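On the leakage question above, a minimal sketch of one option, which would need to be applied back where `new_XV` and `new_Y` were first created (those arrays have since been released to save memory): hold out part of the new data before training so that the evaluation never sees training samples. The 20% split size below is arbitrary.

```python
from sklearn.model_selection import train_test_split

# Reserve a held-out portion of the new data *before* training the new model,
# and evaluate only on the held-out portion afterwards.
model_X, heldout_X, model_Y, heldout_Y = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=42)
```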
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-10-16 21:26:42-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 23.6MB/s in 5.0s
2020-10-16 21:26:47 (16.2 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The SageMaker documentation for the XGBoost algorithm requires that the saved datasets contain no headers or index, and that for the training and validation data the label occurs first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2020-10-16 21:28:22 Starting - Starting the training job...
2020-10-16 21:28:24 Starting - Launching requested ML instances...
2020-10-16 21:29:23 Starting - Preparing the instances for training......
2020-10-16 21:30:09 Downloading - Downloading input data...
2020-10-16 21:30:50 Training - Training image download completed. Training in progress..[34mArguments: train[0m
[34m[2020-10-16:21:30:50:INFO] Running standalone xgboost training.[0m
[34m[2020-10-16:21:30:50:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8475.16mb[0m
[34m[2020-10-16:21:30:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:30:50] S3DistributionType set as FullyReplicated[0m
[34m[21:30:52] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-10-16:21:30:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:30:52] S3DistributionType set as FullyReplicated[0m
[34m[21:30:53] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[21:30:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.296933#011validation-error:0.2961[0m
[34mMultiple eval metrics have been passed: 'validation-error' will be used for early stopping.
[0m
[34mWill train until validation-error hasn't improved in 10 rounds.[0m
[34m[21:30:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.280133#011validation-error:0.2824[0m
[34m[21:31:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.274867#011validation-error:0.2753[0m
[34m[21:31:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.2702#011validation-error:0.2711[0m
[34m[21:31:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.258467#011validation-error:0.2572[0m
[34m[21:31:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.2468#011validation-error:0.2483[0m
[34m[21:31:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.243267#011validation-error:0.2454[0m
[34m[21:31:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.239067#011validation-error:0.2446[0m
[34m[21:31:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.2356#011validation-error:0.2408[0m
[34m[21:31:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.2276#011validation-error:0.234[0m
[34m[21:31:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.220867#011validation-error:0.2294[0m
[34m[21:31:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.218733#011validation-error:0.2242[0m
[34m[21:31:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.215933#011validation-error:0.223[0m
[34m[21:31:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.2126#011validation-error:0.2208[0m
[34m[21:31:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.204933#011validation-error:0.2186[0m
[34m[21:31:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.2036#011validation-error:0.2174[0m
[34m[21:31:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.201467#011validation-error:0.2126[0m
[34m[21:31:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.195867#011validation-error:0.2089[0m
[34m[21:31:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.191333#011validation-error:0.2038[0m
[34m[21:31:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.188533#011validation-error:0.2008[0m
[34m[21:31:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.185867#011validation-error:0.1997[0m
[34m[21:31:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.182867#011validation-error:0.1978[0m
[34m[21:31:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.180133#011validation-error:0.1975[0m
[34m[21:31:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.178667#011validation-error:0.1948[0m
[34m[21:31:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.177133#011validation-error:0.195[0m
[34m[21:31:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.1752#011validation-error:0.193[0m
[34m[21:31:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.172467#011validation-error:0.1925[0m
[34m[21:31:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.170067#011validation-error:0.1895[0m
[34m[21:31:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.167333#011validation-error:0.1883[0m
[34m[21:31:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.164133#011validation-error:0.1873[0m
[34m[21:31:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.164067#011validation-error:0.1856[0m
[34m[21:31:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.161933#011validation-error:0.1857[0m
[34m[21:31:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.161733#011validation-error:0.1841[0m
[34m[21:31:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.1594#011validation-error:0.1824[0m
[34m[21:31:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.157133#011validation-error:0.1805[0m
[34m[21:31:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.156267#011validation-error:0.1804[0m
[34m[21:31:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.154933#011validation-error:0.1796[0m
[34m[21:31:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.153733#011validation-error:0.1781[0m
[34m[21:31:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.151667#011validation-error:0.1769[0m
[34m[21:31:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.15#011validation-error:0.1765[0m
[34m[21:31:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[40]#011train-error:0.149667#011validation-error:0.1768[0m
###Markdown
Testing the model
Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately; instead, we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we first need to create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
............................[32m2020-10-16T21:39:34.789:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34mArguments: serve[0m
[34m[2020-10-16 21:39:34 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-10-16 21:39:34 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-10-16 21:39:34 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-10-16 21:39:34 +0000] [37] [INFO] Booting worker with pid: 37[0m
[34m[2020-10-16 21:39:34 +0000] [38] [INFO] Booting worker with pid: 38[0m
[34m[2020-10-16:21:39:34:INFO] Model loaded successfully for worker : 37[0m
[34m[2020-10-16 21:39:34 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-10-16:21:39:34:INFO] Model loaded successfully for worker : 38[0m
[34m[2020-10-16 21:39:34 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-10-16:21:39:34:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-10-16:21:39:34:INFO] Model loaded successfully for worker : 40[0m
[34m[2020-10-16:21:39:35:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:35:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:35:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:35:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:35:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:35:INFO] Determined delimiter of CSV input is ','[0m
[35mArguments: serve[0m
[35m[2020-10-16 21:39:34 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[35m[2020-10-16 21:39:34 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[35m[2020-10-16 21:39:34 +0000] [1] [INFO] Using worker: gevent[0m
[35m[2020-10-16 21:39:34 +0000] [37] [INFO] Booting worker with pid: 37[0m
[35m[2020-10-16 21:39:34 +0000] [38] [INFO] Booting worker with pid: 38[0m
[35m[2020-10-16:21:39:34:INFO] Model loaded successfully for worker : 37[0m
[35m[2020-10-16 21:39:34 +0000] [39] [INFO] Booting worker with pid: 39[0m
[35m[2020-10-16:21:39:34:INFO] Model loaded successfully for worker : 38[0m
[35m[2020-10-16 21:39:34 +0000] [40] [INFO] Booting worker with pid: 40[0m
[35m[2020-10-16:21:39:34:INFO] Model loaded successfully for worker : 39[0m
[35m[2020-10-16:21:39:34:INFO] Model loaded successfully for worker : 40[0m
[35m[2020-10-16:21:39:35:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:35:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:35:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:35:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:35:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:35:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:35:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:35:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:35:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:35:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:37:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:37:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:37:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:37:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:37:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:37:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:37:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:37:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:37:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:38:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:38:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:38:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:38:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:40:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:40:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:40:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:40:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:40:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:40:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:40:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:40:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:40:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:40:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:40:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:40:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:40:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:40:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:40:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:40:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:42:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:42:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:42:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:42:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:43:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:43:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:43:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:43:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:42:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:42:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:42:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:42:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:43:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:43:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:43:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:43:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:47:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:47:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:47:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:47:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:47:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:47:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:47:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:47:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:47:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:47:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:48:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:47:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:47:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:48:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:50:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:50:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:50:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:50:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:50:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:50:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:50:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:50:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:50:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:50:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:50:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:50:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:52:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:52:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:52:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:39:53:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:39:53:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:52:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:52:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:52:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:39:53:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:39:53:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/370.6 KiB (4.3 MiB/s) with 1 file(s) remaining
Completed 370.6 KiB/370.6 KiB (6.1 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-501357048875/xgboost-2020-10-16-21-35-06-745/test.csv.out to ../data/sentiment_update/test.csv.out
###Markdown
The last step is to read in the output from our model, convert it to something a little more usable (in this case we want the sentiment to be either `1` (positive) or `0` (negative)), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New Data
So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.
However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`.
(TODO) Testing the current model
Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it.
First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.
To do so, we use the vocabulary that we constructed earlier from the original training data to build a `CountVectorizer`, which we will use to transform our new data into its bag of words encoding.
**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.
First, we save the data locally.
**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.
**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.
**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
..............................[32m2020-10-16T21:45:43.276:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34mArguments: serve[0m
[35mArguments: serve[0m
[34m[2020-10-16 21:45:43 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-10-16 21:45:43 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-10-16 21:45:43 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-10-16 21:45:43 +0000] [37] [INFO] Booting worker with pid: 37[0m
[34m[2020-10-16 21:45:43 +0000] [38] [INFO] Booting worker with pid: 38[0m
[34m[2020-10-16 21:45:43 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-10-16:21:45:43:INFO] Model loaded successfully for worker : 37[0m
[34m[2020-10-16 21:45:43 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-10-16:21:45:43:INFO] Model loaded successfully for worker : 38[0m
[34m[2020-10-16:21:45:43:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-10-16:21:45:43:INFO] Model loaded successfully for worker : 40[0m
[35m[2020-10-16 21:45:43 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[35m[2020-10-16 21:45:43 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[35m[2020-10-16 21:45:43 +0000] [1] [INFO] Using worker: gevent[0m
[35m[2020-10-16 21:45:43 +0000] [37] [INFO] Booting worker with pid: 37[0m
[35m[2020-10-16 21:45:43 +0000] [38] [INFO] Booting worker with pid: 38[0m
[35m[2020-10-16 21:45:43 +0000] [39] [INFO] Booting worker with pid: 39[0m
[35m[2020-10-16:21:45:43:INFO] Model loaded successfully for worker : 37[0m
[35m[2020-10-16 21:45:43 +0000] [40] [INFO] Booting worker with pid: 40[0m
[35m[2020-10-16:21:45:43:INFO] Model loaded successfully for worker : 38[0m
[35m[2020-10-16:21:45:43:INFO] Model loaded successfully for worker : 39[0m
[35m[2020-10-16:21:45:43:INFO] Model loaded successfully for worker : 40[0m
[34m[2020-10-16:21:45:43:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:43:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:43:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:43:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:43:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:43:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:43:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:43:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:43:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:43:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:43:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:43:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:45:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:45:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:45:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:45:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:46:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:46:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:46:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:46:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:46:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:46:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:46:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:46:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:46:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:46:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:46:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:46:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:47:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:47:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:47:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:47:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:48:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:48:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:48:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:48:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:48:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:48:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:49:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:49:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:49:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:49:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:50:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:50:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:50:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:51:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:51:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:51:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:51:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:51:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:51:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:51:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:51:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:52:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:52:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:52:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:52:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:54:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:54:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:54:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:54:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:56:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:56:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:56:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:56:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:56:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:56:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:56:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:56:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:59:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:59:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:59:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:45:59:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:45:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:59:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:59:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:59:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:45:59:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:45:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:01:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:01:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:01:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:01:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:01:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:01:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:01:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:01:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:01:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:01:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:01:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:01:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:03:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:03:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:03:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:03:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:04:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:04:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:04:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:04:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:04:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:04:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:06:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:06:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-16:21:46:06:INFO] Sniff delimiter as ','[0m
[34m[2020-10-16:21:46:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:06:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:06:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-16:21:46:06:INFO] Sniff delimiter as ','[0m
[35m[2020-10-16:21:46:06:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/370.9 KiB (4.0 MiB/s) with 1 file(s) remaining
Completed 370.9 KiB/370.9 KiB (5.7 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-501357048875/xgboost-2020-10-16-21-40-48-383/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed, since our model is no longer (as) effective at determining the sentiment of a user-provided review.
In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one, and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit, so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.
Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.
To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.
**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
Using already existing model: xgboost-2020-10-16-21-28-22-569
###Markdown
Diagnose the problem
Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
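###Markdown
With the serializer configured, a single encoded review can be scored directly against the endpoint. The quick sketch below uses the same call that the `get_sample` generator defined below relies on, and assumes `new_XV` (the bag-of-words encoding built earlier with the original vocabulary) is still in memory.
###Code
# Sketch: send one encoded review to the deployed endpoint and round the returned
# probability to a 0/1 sentiment label.
single_prediction = round(float(xgb_predictor.predict(new_XV[0])))
print(single_prediction)
###Output
_____no_output_____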
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.
**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
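For readers who haven't used generators before, here is a tiny standalone sketch of the idea using plain Python lists (no SageMaker involved): values are produced lazily, one at a time, only when requested.
###Code
# Minimal generator sketch: yield the indices at which two lists of labels disagree.
# Nothing past the current `yield` runs until the caller asks for the next value.
def mismatch_indices(predictions, labels):
    for idx, (pred, label) in enumerate(zip(predictions, labels)):
        if pred != label:
            yield idx  # execution pauses here until next() is called again

demo = mismatch_indices([0, 1, 1, 0], [0, 1, 0, 1])
print(next(demo))  # -> 2, the first disagreement, computed on demand
print(next(demo))  # -> 3
###Output
_____no_output_____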
###Code
# Send each encoded review to the deployed endpoint; yield only those reviews
# (together with their true label) that the model classifies incorrectly.
def get_sample(in_X, in_XV, in_Y):
    for idx, smp in enumerate(in_X):
        res = round(float(xgb_predictor.predict(in_XV[idx])))
        if res != in_Y[idx]:
            yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set that are not classified correctly. To get the *next* sample we simply call the built-in `next()` function on our generator.
###Code
print(next(gn))
###Output
(['never', 'danc', 'flamenco', 'somehow', 'feel', 'like', 'movi', 'perfect', 'color', 'blatant', 'danc', 'gypsi', 'rival', 'put', 'togeth', 'made', 'movi', 'seem', 'end', 'soon', 'seen', 'carlo', 'saura', 'movi', 'agre', 'film', 'may', 'best', 'product', 'feel', 'best', 'characterist', 'past', 'film', 'put', 'togeth', 'align', 'make', 'iberia', 'appreci', 'use', 'mirror', 'reveal', 'activ', 'go', 'behind', 'camera', 'watch', 'movi', 'felt', 'like', 'sit', 'small', 'restaur', 'madrid', 'comfort', 'watch', 'dancer', 'bang', 'wooden', 'plank', 'delici', 'fruit', 'cocktail', 'movi', 'fit', 'like', 'glove', 'know', 'abl', 'get', 'copi', 'film', 'us', 'next', 'year', 'recommend', 'movi', 'anyon', 'attract', 'livelihood', 'cultur', 'safe', 'say', 'movi', 'certainli', 'favorit', 'list', 'banana'], 0)
###Markdown
After looking at a few examples, maybe we decide to look at the `5000` most frequently appearing words in each data set: the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.
To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:484: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'
warnings.warn("The parameter 'token_pattern' will not be used"
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'playboy', 'reincarn', 'spill', 'ghetto', '21st', 'weari', 'victorian'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'sophi', 'optimist', 'dubiou', 'masterson', 'banana', 'orchestr', 'omin'}
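###Markdown
A quick optional check (a sketch, assuming `new_X`, `new_vectorizer`, `original_vocabulary`, and `new_vocabulary` from the cells above are still in memory): count how often the words unique to the new vocabulary actually occur in the new reviews.
###Code
# Sketch: total occurrence counts, across all new reviews, of the words that appear
# in the new vocabulary but not in the original one.
import numpy as np

# Column-wise sum over the bag-of-words matrix gives a per-word total.
new_word_totals = np.asarray(new_vectorizer.transform(new_X).sum(axis=0)).flatten()
for word in sorted(new_vocabulary - original_vocabulary):
    print(word, new_word_totals[new_vectorizer.vocabulary_[word]])
###Output
_____no_output_____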
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.
**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?
**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.
(TODO) Build a new model
Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.
To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.
**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.
In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset, so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.
**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.
**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Once the model has been created, we can train it with our new data.
**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
2020-10-16 21:53:57 Starting - Starting the training job...
2020-10-16 21:53:59 Starting - Launching requested ML instances...
2020-10-16 21:54:56 Starting - Preparing the instances for training...............
2020-10-16 21:57:20 Downloading - Downloading input data...
2020-10-16 21:57:43 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-10-16:21:58:05:INFO] Running standalone xgboost training.[0m
[34m[2020-10-16:21:58:05:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8491.48mb[0m
[34m[2020-10-16:21:58:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:58:05] S3DistributionType set as FullyReplicated[0m
[34m[21:58:07] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-10-16:21:58:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:58:07] S3DistributionType set as FullyReplicated[0m
[34m[21:58:08] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[21:58:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.2986#011validation-error:0.3004[0m
[34mMultiple eval metrics have been passed: 'validation-error' will be used for early stopping.
[0m
[34mWill train until validation-error hasn't improved in 10 rounds.[0m
[34m[21:58:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.2946#011validation-error:0.2989[0m
[34m[21:58:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.286067#011validation-error:0.2881[0m
[34m[21:58:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.280867#011validation-error:0.2843[0m
2020-10-16 21:58:04 Training - Training image download completed. Training in progress.[34m[21:58:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.272067#011validation-error:0.2777[0m
[34m[21:58:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.270333#011validation-error:0.2748[0m
[34m[21:58:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.2648#011validation-error:0.2704[0m
[34m[21:58:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.256467#011validation-error:0.2638[0m
[34m[21:58:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.248533#011validation-error:0.2556[0m
[34m[21:58:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.243533#011validation-error:0.2516[0m
[34m[21:58:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.239267#011validation-error:0.2489[0m
[34m[21:58:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.233533#011validation-error:0.2445[0m
[34m[21:58:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.231733#011validation-error:0.2404[0m
[34m[21:58:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.224933#011validation-error:0.2349[0m
[34m[21:58:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.220867#011validation-error:0.2346[0m
[34m[21:58:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.2152#011validation-error:0.228[0m
[34m[21:58:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.211667#011validation-error:0.2268[0m
[34m[21:58:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.204467#011validation-error:0.2217[0m
[34m[21:58:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.201067#011validation-error:0.2189[0m
[34m[21:58:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.199667#011validation-error:0.2192[0m
[34m[21:58:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.198533#011validation-error:0.2159[0m
[34m[21:58:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.1956#011validation-error:0.2142[0m
[34m[21:58:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.191267#011validation-error:0.2152[0m
[34m[21:58:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.188267#011validation-error:0.2116[0m
[34m[21:58:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.184933#011validation-error:0.2109[0m
[34m[21:58:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.1836#011validation-error:0.2093[0m
[34m[21:58:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.1814#011validation-error:0.2049[0m
[34m[21:58:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.180267#011validation-error:0.2036[0m
[34m[21:58:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.178133#011validation-error:0.2003[0m
[34m[21:58:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.1762#011validation-error:0.1996[0m
[34m[21:58:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.175333#011validation-error:0.198[0m
[34m[21:58:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.175333#011validation-error:0.1982[0m
[34m[21:58:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.174867#011validation-error:0.1968[0m
[34m[21:58:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.173#011validation-error:0.1967[0m
[34m[21:58:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.1724#011validation-error:0.196[0m
[34m[21:58:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.1718#011validation-error:0.1953[0m
[34m[21:58:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.169867#011validation-error:0.1946[0m
[34m[21:58:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.166467#011validation-error:0.1929[0m
[34m[21:59:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.1652#011validation-error:0.1934[0m
[34m[21:59:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.163067#011validation-error:0.1931[0m
###Markdown
(TODO) Check the new model
So now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.
To do this, we will first test our model on the new data.
**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.
**Question:** How might you address the leakage problem? (One possible approach is sketched a couple of cells below.)
First, we create a new transformer based on our new XGBoost model.
**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
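###Markdown
One possible way to address the leakage question raised above (a sketch, not a full solution): carve out a held-out slice of the newly collected data *before* any retraining, and only report the new model's final accuracy on that slice. The sketch below re-reads the re-encoded reviews from the `new_data.csv` file saved earlier and assumes `new_Y` and `data_dir` are still defined.
###Code
# Sketch: reserve 20% of the new data as a held-out test set that the new model never
# sees during training or validation, so the final accuracy estimate is not leaked.
from sklearn.model_selection import train_test_split

new_XV_df = pd.read_csv(os.path.join(data_dir, 'new_data.csv'), header=None)
train_val_X, heldout_X, train_val_y, heldout_y = train_test_split(
    new_XV_df.values, new_Y, test_size=0.2, random_state=42)
# The new XGBoost model would then be trained and validated on train_val_* only,
# with heldout_* playing the role the original test set played for the first model.
###Output
_____no_output_____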
###Markdown
Next we test our model on the new data.
**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable).
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
...............................[32m2020-10-16T22:06:10.372:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34mArguments: serve[0m
[34m[2020-10-16 22:06:10 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-10-16 22:06:10 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-10-16 22:06:10 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-10-16 22:06:10 +0000] [37] [INFO] Booting worker with pid: 37[0m
[34m[2020-10-16 22:06:10 +0000] [38] [INFO] Booting worker with pid: 38[0m
[34m[2020-10-16:22:06:10:INFO] Model loaded successfully for worker : 37[0m
[34m[2020-10-16 22:06:10 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-10-16:22:06:10:INFO] Model loaded successfully for worker : 38[0m
[34m[2020-10-16 22:06:10 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-10-16:22:06:10:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-10-16:22:06:10:INFO] Model loaded successfully for worker : 40[0m
[35mArguments: serve[0m
[35m[2020-10-16 22:06:10 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[35m[2020-10-16 22:06:10 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[35m[2020-10-16 22:06:10 +0000] [1] [INFO] Using worker: gevent[0m
[35m[2020-10-16 22:06:10 +0000] [37] [INFO] Booting worker with pid: 37[0m
[35m[2020-10-16 22:06:10 +0000] [38] [INFO] Booting worker with pid: 38[0m
[35m[2020-10-16:22:06:10:INFO] Model loaded successfully for worker : 37[0m
[35m[2020-10-16 22:06:10 +0000] [39] [INFO] Booting worker with pid: 39[0m
[35m[2020-10-16:22:06:10:INFO] Model loaded successfully for worker : 38[0m
[35m[2020-10-16 22:06:10 +0000] [40] [INFO] Booting worker with pid: 40[0m
[35m[2020-10-16:22:06:10:INFO] Model loaded successfully for worker : 39[0m
[35m[2020-10-16:22:06:10:INFO] Model loaded successfully for worker : 40[0m
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/366.7 KiB (3.6 MiB/s) with 1 file(s) remaining
Completed 366.7 KiB/366.7 KiB (5.1 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-2-501357048875/xgboost-2020-10-16-22-01-12-265/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
---------------!
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return the unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
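Before diving into the project code, here is a toy illustration (not part of the notebook's pipeline) of what a bag-of-words encoding looks like for two made-up, already-tokenized 'reviews':

```python
# Toy example only: the 'documents' are lists of words, mirroring this notebook's preprocessing.
from sklearn.feature_extraction.text import CountVectorizer

docs = [["great", "movie", "great", "acting"],
        ["terrible", "movie"]]

# Identity preprocessor/tokenizer because the documents are already tokenized.
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
features = toy_vectorizer.fit_transform(docs).toarray()

print(toy_vectorizer.vocabulary_)  # maps each word to its column index
print(features)                    # row i holds the word counts for document i
```

Each review becomes a fixed-length vector of word counts, which is exactly the representation the cell below builds (with a 5000-word vocabulary) for the real data.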
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
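If generators are new to you, here is a tiny self-contained illustration (unrelated to the review data) of the pattern used below: a generator function produces values lazily, one per `next()` call, rather than computing everything up front.

```python
# Minimal generator example: lazily yield the positions and values that fail a check.
def mismatches(values, expected):
    for idx, val in enumerate(values):
        if val != expected[idx]:
            yield idx, val     # execution pauses here until next() is called again

gen = mismatches([1, 0, 1, 1], [1, 1, 1, 0])
print(next(gen))               # (1, 0) -- only computed up to the first mismatch
print(next(gen))               # (3, 1)
```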
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
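(Returning briefly to the question above: one rough way to investigate it might be to count how often the words that are new to the vocabulary actually occur in the new reviews. The sketch below assumes `new_X`, `original_vocabulary` and `new_vocabulary` are still in memory as defined earlier; it is exploratory only and not part of the required workflow.)

```python
# Exploratory sketch: how often do the newly-appearing vocabulary words show up in the new reviews?
from collections import Counter

added_words = new_vocabulary - original_vocabulary
counts = Counter(word for review in new_X for word in review if word in added_words)

# A word that dominates this list would hint at what has changed in the new reviews.
print(counts.most_common(10))
```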
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
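Under the hood, waiting on an endpoint update amounts to something like the following sketch (a rough illustration, not the library's actual code): repeatedly asking SageMaker to describe the endpoint and checking its status.

```python
import time

# Rough illustration of what waiting on an endpoint update amounts to.
status = 'Updating'
while status == 'Updating':
    time.sleep(30)
    desc = session.sagemaker_client.describe_endpoint(EndpointName=xgb_predictor.endpoint)
    status = desc['EndpointStatus']   # eventually 'InService', or 'Failed' if something went wrong
print(status)
```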
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-05-16 03:37:57-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 92.5MB/s in 0.9s
2020-05-16 03:37:58 (92.5 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
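If the bag-of-words idea is new, here is a tiny standalone illustration (independent of the IMDb data, just a toy corpus) of what `CountVectorizer` produces: each document becomes a vector of word counts over a fixed vocabulary learned from the training documents only, and words not seen in training are simply ignored.

```python
from sklearn.feature_extraction.text import CountVectorizer

toy_train = ["the movie was great great fun", "the movie was awful"]
toy_test = ["great fun but awful ending"]

toy_vectorizer = CountVectorizer()
print(toy_vectorizer.fit_transform(toy_train).toarray())  # counts for the training documents
print(sorted(toy_vectorizer.vocabulary_))                 # the learned vocabulary
print(toy_vectorizer.transform(toy_test).toarray())       # 'but' and 'ending' are unknown, so they are dropped
```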
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
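As a point of comparison, the low level route would mean talking to S3 directly, for instance with `boto3`. The sketch below is illustrative only: the bucket name and key are placeholders, and the notebook itself sticks with the high level `upload_data()` call in the next cell.

```python
import boto3

s3 = boto3.client('s3')

# Low level equivalent of upload_data(): we choose the bucket and key ourselves.
# 'your-bucket-name' is a placeholder; upload_data() would use the session's default bucket.
s3.upload_file(os.path.join(data_dir, 'train.csv'), 'your-bucket-name', 'sentiment-update/train.csv')
```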
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
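To make the three pieces concrete, the sketch below shows roughly what a low level `create_model` request looks like: the container image supplies the inference code, while the S3 artifact location supplies the trained trees. This is only an illustration, the model name and artifact path are placeholders, and `container` and `role` are only created in the next cell.

```python
# Illustration only: 'container' and 'role' come from the next cell, and the
# ModelDataUrl is a placeholder for wherever a training job wrote its artifacts.
example_model = session.sagemaker_client.create_model(
    ModelName='sentiment-xgboost-example-model',                # must be unique in your account/region
    ExecutionRoleArn=role,                                       # IAM role allowed to read the artifacts
    PrimaryContainer={
        'Image': container,                                      # Docker image with the inference code
        'ModelDataUrl': 's3://your-bucket/path/to/model.tar.gz'  # model artifacts (placeholder)
    })
```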
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
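If generators are unfamiliar, the following toy example (unrelated to the reviews themselves) shows the key property we are relying on: values are produced lazily, one at a time, only when `next()` is called.

```python
def squares(n):
    for i in range(n):
        yield i ** 2    # execution pauses here until the next value is requested

gen = squares(1000000)  # nothing is computed yet
print(next(gen))        # 0
print(next(gen))        # 1 -- only as many values as we ask for are ever computed
```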
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much, however if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
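Returning to the **Question** above, one way to start investigating (a sketch, assuming `new_X` still holds the pre-processed new reviews and the two vocabulary sets from the previous cells are still in memory) is to count how often the words that are unique to the new vocabulary actually occur in the new reviews; a handful of very frequent newcomers would be a strong hint about what has changed.

```python
from collections import Counter

# Count every word across the new reviews, then keep only the words that
# appear in the new vocabulary but not in the original one.
word_counts = Counter(word for review in new_X for word in review)
new_only_counts = {word: word_counts[word] for word in (new_vocabulary - original_vocabulary)}
print(sorted(new_only_counts.items(), key=lambda item: item[1], reverse=True)[:10])
```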
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
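On the leakage question, one simple option (a sketch of the idea rather than what this notebook actually does) would have been to hold out a slice of the newly collected, newly labelled reviews right after loading them, before `new_X` was cleared and before any vectorizer or model saw them, and reserve that slice purely for evaluation. All variable names below are hypothetical.

```python
from sklearn.model_selection import train_test_split

# Hypothetical split, performed back when the new reviews were first loaded:
# 20% of the new labelled data is set aside and never used for fitting anything.
new_train_reviews, new_heldout_reviews, new_train_labels, new_heldout_labels = train_test_split(
    new_X, new_Y, test_size=0.2, random_state=42)
```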
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
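One practical habit worth mentioning (a sketch, not something this notebook requires): before issuing the update you could record which configuration the endpoint is currently using, so that rolling back later is just another `update_endpoint` call with the old name.

```python
# Remember the configuration that is live right now ...
old_config_name = session.sagemaker_client.describe_endpoint(
    EndpointName=xgb_predictor.endpoint)['EndpointConfigName']

# ... so that, if the new model misbehaves, switching back is a one-liner:
# session.sagemaker_client.update_endpoint(
#     EndpointName=xgb_predictor.endpoint, EndpointConfigName=old_config_name)
```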
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
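As a tiny illustration of the kind of clean-up the helper defined below performs (a sketch on a made-up review, independent of the IMDb data):

```python
import re
from bs4 import BeautifulSoup

sample = "<br />This movie was <b>not</b> great... it was THE GREATEST!"
text = BeautifulSoup(sample, "html.parser").get_text()    # drop the HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())         # lower-case and strip punctuation
print(text.split())                                       # a plain list of cleaned words
```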
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn releases
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
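###Markdown
As an aside, the `preprocessor=lambda x: x, tokenizer=lambda x: x` trick used above can look a little opaque. The cell below is a tiny, self-contained illustration (toy data only, not part of the pipeline) of how `CountVectorizer` behaves when it is handed documents that are already tokenized into lists of words.
###Code
from sklearn.feature_extraction.text import CountVectorizer
# Toy example: two 'documents' that are already lists of words, so we skip the built-in
# preprocessing and tokenization by passing identity functions.
toy_docs = [['great', 'movie', 'great', 'acting'], ['bad', 'movie']]
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_bow = toy_vectorizer.fit_transform(toy_docs).toarray()
print(toy_vectorizer.vocabulary_)  # word -> column index
print(toy_bow)                     # one row of counts per document
###Output
_____no_output_____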
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
According to its documentation, the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index column and that, for the training and validation data, the label occurs first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
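###Markdown
Before uploading anything, a quick optional sanity check: read a few rows of the saved training file back from disk to confirm the layout described above (no header, no index, label in the first column). This only relies on the `data_dir` path and the csv file we just wrote.
###Code
# Peek at the first few rows of train.csv; this reads from disk, so it is fine that the
# corresponding in-memory variables have just been set to None.
sample_rows = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, nrows=3)
print(sample_rows.iloc[:, 0].values)  # first column: the 0/1 sentiment labels
print(sample_rows.shape)              # each row: 1 label followed by 5000 word counts
###Output
_____no_output_____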
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ as well as the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,                 # maximum depth of each tree
                        eta=0.2,                     # learning rate
                        gamma=4,                     # minimum loss reduction required to make a split
                        min_child_weight=6,          # minimum sum of instance weight needed in a child
                        subsample=0.8,               # fraction of the training data sampled for each tree
                        silent=0,                    # 0 means print training progress messages
                        objective='binary:logistic', # binary classification with probability output
                        early_stopping_rounds=10,    # stop if the validation metric does not improve for 10 rounds
                        num_round=500)               # maximum number of boosting rounds
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
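###Markdown
Before digging further it can help to see *how* the model is going wrong on the new data, rather than looking only at the overall accuracy. The cell below is a small optional check (it only assumes that `new_Y` and `predictions` from the cells above are still in memory) that prints a confusion matrix.
###Code
from sklearn.metrics import confusion_matrix
# Rows are the true labels (0 = negative, 1 = positive), columns are the predicted labels.
# A heavily lopsided matrix would suggest the model is now biased towards one class.
print(confusion_matrix(new_Y, predictions))
###Output
_____no_output_____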
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
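###Markdown
One way to follow up on the vocabulary differences printed above is to count how often those words actually occur in the newly collected reviews. The cell below is a minimal sketch of such a check; it assumes that `new_X` (the tokenized new reviews) is still in memory.
###Code
from collections import Counter
# Count every token across the new reviews
new_word_counts = Counter(word for review in new_X for word in review)
# Show the most frequent words that appear in the new vocabulary but not in the original one
diff_words = new_vocabulary - original_vocabulary
print(sorted(((new_word_counts[w], w) for w in diff_words), reverse=True)[:20])
###Output
_____no_output_____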
###Markdown
These words themselves don't tell us much, however if one of these words occurred with a large frequency, that might tell us something (the frequency counts computed above give one way to check). In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Not only which (if any) words appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
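###Markdown
As an aside on the leakage question above: one standard way to address it would be to set aside a held-out portion of the newly collected data *before* re-training and to evaluate only on that portion. The cell below is just a toy sketch of the idea using `train_test_split` on synthetic arrays (the real `new_XV` has already been released from memory at this point); nothing later in the notebook relies on it.
###Code
from sklearn.model_selection import train_test_split
import numpy as np
# Toy arrays standing in for the new features and labels; with the real data one would
# split new_XV / new_Y like this *before* fitting new_xgb and keep the hold-out untouched.
toy_X = np.arange(20).reshape(10, 2)
toy_y = np.arange(10)
X_rest, X_holdout, y_rest, y_holdout = train_test_split(toy_X, toy_y, test_size=0.2, random_state=42)
print(X_holdout.shape, y_holdout.shape)
###Output
_____no_output_____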
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return the unified training data, test data, training labels and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the shared PorterStemmer
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
According to its documentation, the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index column and that, for the training and validation data, the label occurs first in each row.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([pd.DataFrame(val_y), pd.DataFrame(val_X)], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X)], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ as well as the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
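# In addition to the accuracy computed below, a confusion matrix (an optional extra check)
# shows how the errors split between false positives and false negatives.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_y, predictions))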
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
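# An added sanity assertion: every encoded review should have exactly one entry per word in
# the original vocabulary, which was capped at 5000 words.
assert len(new_XV[100]) == len(vocabulary) == 5000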
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
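As a minimal illustration (a toy example, not related to the review data), the generator below yields square numbers one at a time; nothing is computed until `next()` is called, which is exactly the lazy behaviour we rely on when searching for misclassified reviews.
###Code
# Toy example of a Python generator: values are produced lazily, one per call to next().
def squares(n):
    for i in range(n):
        yield i ** 2

gen = squares(5)
print(next(gen))  # 0
print(next(gen))  # 1
###Output
_____no_output_____
###Markdown
The generator we construct below for the misclassified reviews follows exactly the same pattern.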
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; perhaps some new slang has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a high frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear very frequently.**Question** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
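As one possible starting point for the question above (a sketch, assuming `new_X`, `new_vectorizer`, `new_vocabulary` and `original_vocabulary` from the previous cells are still in memory), we can total up how often each of the newly appearing words occurs in the new reviews and look at the most frequent ones.
###Code
# Sketch: count how often each word that is new to the vocabulary appears in the new reviews.
# Assumes new_X, new_vectorizer, new_vocabulary and original_vocabulary are defined above.
import numpy as np

word_counts = np.asarray(new_vectorizer.transform(new_X).sum(axis=0)).flatten()
new_only_words = new_vocabulary - original_vocabulary
freq = {w: int(word_counts[new_vectorizer.vocabulary_[w]]) for w in new_only_words}
print(sorted(freq.items(), key=lambda kv: kv[1], reverse=True)[:20])
###Output
_____no_output_____
###Markdown
With that aside, we now re-encode the new data using the new vocabulary.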
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
# s3_new_input_train = None
# s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
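Regarding the leakage question above, one option (a sketch only; `new_XV` and the related variables have already been cleared from memory at this point, so this would need to be run before the data was saved and set to `None`) is to hold out a portion of the new data before training and to evaluate only on that hold-out set.
###Code
# Illustrative sketch of a hold-out split to avoid evaluating on training data.
# This assumes new_XV and new_Y are still in memory, which is not the case at this point
# in the notebook; it is shown only to make the idea concrete.
from sklearn.model_selection import train_test_split

new_rest_X, new_holdout_X, new_rest_y, new_holdout_y = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=42)
# The new model would then be trained on new_rest_* and scored on new_holdout_*.
###Output
_____no_output_____
###Markdown
Returning to the plan above, we first create a transformer from the newly trained model.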
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
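# Optionally (an extra check, not required by the notebook), we can inspect the endpoint's
# status while the update is in progress.
print(session.sagemaker_client.describe_endpoint(EndpointName=xgb_predictor.endpoint)['EndpointStatus'])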
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- In what other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can typically be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case and replace non-alphanumeric characters with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer created above
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
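As a tiny illustration of what this encoding looks like (a toy example, unrelated to the IMDb data), we can fit a `CountVectorizer` on two short 'training' documents and then transform an unseen one; words not seen during fitting are simply ignored.
###Code
# Toy bag-of-words example: the vocabulary is learned from the "training" documents only.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = ["the movie was great", "the movie was awful"]
toy_vectorizer = CountVectorizer()
print(toy_vectorizer.fit_transform(toy_train).toarray())
print(toy_vectorizer.vocabulary_)
print(toy_vectorizer.transform(["a truly great film"]).toarray())
###Output
_____no_output_____
###Markdown
We now do the same thing, at scale and with caching, for the IMDb reviews.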
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first in each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
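# For comparison, the low-level route would use boto3 directly. The sketch below is only an
# illustration (it is not required by the notebook): it uploads the same train.csv to the same
# default bucket and key prefix as the call above, so running it is redundant but harmless.
import boto3

s3 = boto3.client('s3')
s3.upload_file(os.path.join(data_dir, 'train.csv'),
               session.default_bucket(),
               '{}/{}'.format(prefix, 'train.csv'))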
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference is also useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we first need to create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; perhaps some new slang has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a high frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear very frequently.**Question** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
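###Markdown
One hedged way to address the leakage question above would be to hold out a portion of the new data as a genuine test set *before* training, and only report accuracy on that held-out slice. The cell below is a sketch only, not something the notebook actually does: it assumes the encoded reviews and labels (`new_XV`, `new_Y`) are still available before the train/validation split, whereas above they were simply sliced and then set to `None`.
###Code
# Hypothetical sketch of a leakage-free split -- not part of the original workflow.
# Assumes `new_XV` (bag-of-words encoded new reviews) and `new_Y` (labels) exist.
from sklearn.model_selection import train_test_split

new_rest_X, new_test_X, new_rest_y, new_test_y = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=42)
# A further split of the remainder would provide the validation set, and the
# final accuracy would be reported only on new_test_X / new_test_y, which the
# model never sees during training.
###Output
_____no_output_____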
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
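###Markdown
If a single accuracy number feels too coarse, an optional follow-up is to look at the confusion matrix as well. This is a sketch, not part of the original notebook, and it assumes `new_Y` and `predictions` are still in memory from the cell above.
###Code
# Optional, hypothetical check: break the accuracy down by class.
from sklearn.metrics import confusion_matrix

# Rows correspond to the true labels (0 = negative, 1 = positive),
# columns to the predicted labels.
print(confusion_matrix(new_Y, predictions))
###Output
_____no_output_____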
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong. **We don't expect that the distribution has changed so much that our new model should behave very poorly on old data.**To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real-world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors that can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word, reusing the PorterStemmer created above
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
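###Markdown
If the Bag-of-Words idea used above is new, a tiny toy example (independent of the review data, and not part of the original notebook) may help: each document becomes a vector of word counts over a fixed vocabulary learned from the training documents only.
###Code
# Toy illustration of Bag-of-Words -- hypothetical, not part of the pipeline.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [["the", "movie", "was", "great"], ["the", "movie", "was", "bad"]]
toy_vec = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vec.fit_transform(toy_train).toarray()) # one count vector per document
print(toy_vec.vocabulary_)                        # word -> column index
###Output
_____no_output_____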
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with, and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models, such as the XGBoost model we will be using, and custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we first need to create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one, and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set that are not classified correctly. To get the *next* sample we simply call the built-in `next` function on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
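###Markdown
For readers less familiar with generators, here is a tiny, self-contained illustration (unrelated to the notebook's data) of how `yield` and `next` interact: the function body is suspended at each `yield` and resumed by the next call to `next`.
###Code
# Minimal generator demo -- illustrative only, not tied to the reviews above.
def count_up_to(n):
    i = 1
    while i <= n:
        yield i      # hand back one value and pause here
        i += 1       # resume from this point on the next call to next()

g = count_up_to(3)
print(next(g))   # 1
print(next(g))   # 2
print(list(g))   # [3] -- whatever values remain
###Output
_____no_output_____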
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
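###Markdown
Once the wait call returns, we can double check that the endpoint really is serving the new configuration. This is an optional sanity check and not one of the notebook's TODOs; it is a minimal sketch that assumes only the `session` and predictor objects created above and uses the SageMaker client's `describe_endpoint` call.
###Code
# Optional sanity check (illustrative sketch): confirm the endpoint is back
# InService and now points at the new endpoint configuration.
endpoint_info = session.sagemaker_client.describe_endpoint(EndpointName=xgb_predictor.endpoint)
print(endpoint_info['EndpointStatus'])       # should be 'InService' once the update finishes
print(endpoint_info['EndpointConfigName'])   # should match new_xgb_endpoint_config_name
###Output
_____no_output_____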
###Markdown
Step 7: Delete the Endpoint

Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
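###Markdown
If you want to be certain that nothing is still running (and accruing charges), you can list the endpoints in the account. This is an illustrative sketch rather than one of the notebook's required steps; it assumes only the `session` object created earlier and the SageMaker client's `list_endpoints` call.
###Code
# Illustrative sketch: list any endpoints that still exist in this account/region.
# After the delete above, the endpoint created in this notebook should no longer appear.
for ep in session.sagemaker_client.list_endpoints()['Endpoints']:
    print(ep['EndpointName'], ep['EndpointStatus'])
###Output
_____no_output_____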
###Markdown
Some Additional Questions

This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.

For example,

- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large, say you only received 500 samples?

Optional: Clean up

The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis: Updating a Model in SageMaker

_Deep Learning Nanodegree Program | Deployment_

---

In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.

This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5.

Instructions

Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.

> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
Requirement already satisfied: sagemaker==1.72.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.72.0)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.17.2)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (21.3)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (4.5.0)
Requirement already satisfied: smdebug-rulesconfig==0.1.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.4)
Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.20.25)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: botocore<1.24.0,>=1.23.25 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.23.25)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: s3transfer<0.6.0,>=0.5.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.5.0)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.10.0.0)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7)
Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.16.0)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.24.0,>=1.23.25->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.24.0,>=1.23.25->boto3>=1.14.12->sagemaker==1.72.0) (1.26.5)
###Markdown
Step 1: Downloading the data

The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.

> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.

We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
--2022-02-07 06:32:30-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 23.2MB/s in 4.5s
2022-02-07 06:32:35 (17.8 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the data

The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the data

Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Wrote preprocessed data to cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words features

For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set, so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoost

Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.

Writing the dataset

The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label should occur first for each sample.

For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3

Amazon's S3 service allows us to store files that can be accessed both by the built-in training models, such as the XGBoost model we will be using, and by custom models such as the one we will see a little later.

For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality, in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach, although using the low level approach is certainly an option.

Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.

For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
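###Markdown
If you would rather not switch over to the S3 console, the uploaded objects can also be listed programmatically. This is an optional, illustrative sketch (not part of the original notebook) that assumes only the `session` and `prefix` variables defined above and the standard boto3 `list_objects_v2` call.
###Code
# Optional sketch: list the objects that upload_data() just placed under our prefix
# in the session's default bucket.
s3_client = session.boto_session.client('s3')
response = s3_client.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
###Output
_____no_output_____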
###Markdown
Creating the XGBoost model

Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.

- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Fit the XGBoost model

Now that our model has been set up, we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2022-02-07 07:02:08 Starting - Starting the training job...
2022-02-07 07:02:19 Starting - Launching requested ML instances......
2022-02-07 07:03:28 Starting - Preparing the instances for training.........
2022-02-07 07:04:59 Downloading - Downloading input data...
2022-02-07 07:05:20 Training - Downloading the training image.[34mArguments: train[0m
[34m[2022-02-07:07:05:42:INFO] Running standalone xgboost training.[0m
[34m[2022-02-07:07:05:42:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8362.86mb[0m
[34m[2022-02-07:07:05:42:INFO] Determined delimiter of CSV input is ','[0m
[34m[07:05:42] S3DistributionType set as FullyReplicated[0m
[34m[07:05:44] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2022-02-07:07:05:44:INFO] Determined delimiter of CSV input is ','[0m
[34m[07:05:44] S3DistributionType set as FullyReplicated[0m
[34m[07:05:45] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[07:05:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.296667#011validation-error:0.3032[0m
[34mMultiple eval metrics have been passed: 'validation-error' will be used for early stopping.[0m
[34mWill train until validation-error hasn't improved in 10 rounds.[0m
[34m[07:05:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.281#011validation-error:0.2858[0m
[34m[07:05:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.277867#011validation-error:0.2831[0m
[34m[07:05:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.275933#011validation-error:0.2823[0m
[34m[07:05:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.269333#011validation-error:0.2795[0m
[34m[07:05:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.257667#011validation-error:0.2668[0m
[34m[07:05:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.2462#011validation-error:0.2608[0m
2022-02-07 07:05:41 Training - Training image download completed. Training in progress.[34m[07:06:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.2394#011validation-error:0.2539[0m
[34m[07:06:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.2304#011validation-error:0.2469[0m
[34m[07:06:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.2244#011validation-error:0.24[0m
[34m[07:06:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.220933#011validation-error:0.2371[0m
[34m[07:06:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.214667#011validation-error:0.2323[0m
[34m[07:06:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.2086#011validation-error:0.2247[0m
[34m[07:06:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.206267#011validation-error:0.2256[0m
[34m[07:06:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.204933#011validation-error:0.2242[0m
[34m[07:06:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.201333#011validation-error:0.2205[0m
[34m[07:06:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.1964#011validation-error:0.2165[0m
[34m[07:06:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.1954#011validation-error:0.2139[0m
[34m[07:06:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.1922#011validation-error:0.2119[0m
[34m[07:06:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.189667#011validation-error:0.2082[0m
[34m[07:06:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.185267#011validation-error:0.2058[0m
[34m[07:06:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.183#011validation-error:0.2022[0m
[34m[07:06:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.1816#011validation-error:0.2008[0m
[34m[07:06:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.1772#011validation-error:0.1998[0m
[34m[07:06:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.1748#011validation-error:0.1996[0m
[34m[07:06:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.1726#011validation-error:0.1981[0m
[34m[07:06:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.170533#011validation-error:0.1965[0m
[34m[07:06:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.1682#011validation-error:0.1947[0m
[34m[07:06:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.166533#011validation-error:0.1934[0m
[34m[07:06:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.164333#011validation-error:0.1914[0m
[34m[07:06:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.162133#011validation-error:0.1902[0m
[34m[07:06:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.161667#011validation-error:0.1888[0m
[34m[07:06:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.159533#011validation-error:0.1877[0m
[34m[07:06:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.1582#011validation-error:0.187[0m
[34m[07:06:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.1566#011validation-error:0.187[0m
[34m[07:06:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.1544#011validation-error:0.1856[0m
[34m[07:06:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.153467#011validation-error:0.1844[0m
[34m[07:06:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.152867#011validation-error:0.1818[0m
[34m[07:06:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.152133#011validation-error:0.1824[0m
[34m[07:06:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.152#011validation-error:0.1821[0m
[34m[07:06:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[40]#011train-error:0.149333#011validation-error:0.1799[0m
[34m[07:06:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[41]#011train-error:0.1486#011validation-error:0.1785[0m
###Markdown
Testing the model

Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference is also useful to us as it means we can perform inference on our entire test set.

To perform a Batch Transformation we first need to create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
.....................................[34mArguments: serve[0m
[34m[2022-02-07 07:15:55 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[34m[2022-02-07 07:15:55 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2022-02-07 07:15:55 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2022-02-07 07:15:55 +0000] [21] [INFO] Booting worker with pid: 21[0m
[34m[2022-02-07 07:15:55 +0000] [22] [INFO] Booting worker with pid: 22[0m
[34m[2022-02-07 07:15:55 +0000] [23] [INFO] Booting worker with pid: 23[0m
[34m[2022-02-07 07:15:55 +0000] [24] [INFO] Booting worker with pid: 24[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:15:55:INFO] Model loaded successfully for worker : 21[0m
[34m[2022-02-07:07:15:55:INFO] Model loaded successfully for worker : 22[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:15:55:INFO] Model loaded successfully for worker : 23[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:15:55:INFO] Model loaded successfully for worker : 24[0m
[35mArguments: serve[0m
[35m[2022-02-07 07:15:55 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[35m[2022-02-07 07:15:55 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[35m[2022-02-07 07:15:55 +0000] [1] [INFO] Using worker: gevent[0m
[35m[2022-02-07 07:15:55 +0000] [21] [INFO] Booting worker with pid: 21[0m
[35m[2022-02-07 07:15:55 +0000] [22] [INFO] Booting worker with pid: 22[0m
[35m[2022-02-07 07:15:55 +0000] [23] [INFO] Booting worker with pid: 23[0m
[35m[2022-02-07 07:15:55 +0000] [24] [INFO] Booting worker with pid: 24[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2022-02-07:07:15:55:INFO] Model loaded successfully for worker : 21[0m
[35m[2022-02-07:07:15:55:INFO] Model loaded successfully for worker : 22[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2022-02-07:07:15:55:INFO] Model loaded successfully for worker : 23[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2022-02-07:07:15:55:INFO] Model loaded successfully for worker : 24[0m
[34m[2022-02-07:07:16:02:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:16:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:16:02:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:16:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:16:02:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:16:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:16:02:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:16:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:16:02:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:16:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:16:02:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:16:02:INFO] Determined delimiter of CSV input is ','[0m
[32m2022-02-07T07:15:59.491:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2022-02-07:07:16:02:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:16:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:16:02:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:16:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:16:06:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:16:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:16:06:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:16:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:16:06:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:16:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:16:06:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:16:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:16:06:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:16:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:16:06:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:16:06:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/473.3 KiB (2.7 MiB/s) with 1 file(s) remaining
Completed 473.3 KiB/473.3 KiB (4.9 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-801008216402/xgboost-2022-02-07-07-09-58-725/test.csv.out to ../data/sentiment_update/test.csv.out
###Markdown
The last step is to read in the output from our model, convert it to something a little more usable (in this case we want the sentiment to be either `1` for positive or `0` for negative), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
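###Markdown
Beyond the single accuracy number, it can be helpful to see how the errors are split between the two classes. This short sketch is optional and not part of the original notebook; it assumes only the `test_y` labels and `predictions` computed above, together with scikit-learn's `confusion_matrix`.
###Code
# Optional sketch: rows are the true labels (0 = negative, 1 = positive),
# columns are the predicted labels.
from sklearn.metrics import confusion_matrix
confusion_matrix(test_y, predictions)
###Output
_____no_output_____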
###Markdown
Step 5: Looking at New Data

So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.

However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`.

(TODO) Testing the current model

Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it.

First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.

To do so, we use the vocabulary that we constructed earlier from the original training data to build a `CountVectorizer`, which we will use to transform our new data into its bag of words encoding.

**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model, we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.

First, we save the data locally.

**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.

**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.

**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
..................................[34mArguments: serve[0m
[34m[2022-02-07 07:23:19 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[34m[2022-02-07 07:23:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2022-02-07 07:23:19 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2022-02-07 07:23:19 +0000] [21] [INFO] Booting worker with pid: 21[0m
[34m[2022-02-07 07:23:19 +0000] [22] [INFO] Booting worker with pid: 22[0m
[34m[2022-02-07 07:23:19 +0000] [23] [INFO] Booting worker with pid: 23[0m
[35mArguments: serve[0m
[35m[2022-02-07 07:23:19 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[35m[2022-02-07 07:23:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[35m[2022-02-07 07:23:19 +0000] [1] [INFO] Using worker: gevent[0m
[35m[2022-02-07 07:23:19 +0000] [21] [INFO] Booting worker with pid: 21[0m
[35m[2022-02-07 07:23:19 +0000] [22] [INFO] Booting worker with pid: 22[0m
[35m[2022-02-07 07:23:19 +0000] [23] [INFO] Booting worker with pid: 23[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:23:19:INFO] Model loaded successfully for worker : 23[0m
[34m[2022-02-07:07:23:19:INFO] Model loaded successfully for worker : 21[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:23:19:INFO] Model loaded successfully for worker : 22[0m
[34m[2022-02-07 07:23:19 +0000] [24] [INFO] Booting worker with pid: 24[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:23:19:INFO] Model loaded successfully for worker : 24[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2022-02-07:07:23:19:INFO] Model loaded successfully for worker : 23[0m
[35m[2022-02-07:07:23:19:INFO] Model loaded successfully for worker : 21[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2022-02-07:07:23:19:INFO] Model loaded successfully for worker : 22[0m
[35m[2022-02-07 07:23:19 +0000] [24] [INFO] Booting worker with pid: 24[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2022-02-07:07:23:19:INFO] Model loaded successfully for worker : 24[0m
[32m2022-02-07T07:23:23.182:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2022-02-07:07:23:26:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:23:26:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:23:26:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:23:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:23:26:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:23:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:23:26:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:23:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:23:27:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:23:27:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:23:26:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:23:26:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:23:26:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:23:26:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:23:27:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:23:27:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:23:30:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:23:30:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:23:30:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:23:30:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:23:30:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:23:30:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:23:30:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:23:30:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/473.1 KiB (2.8 MiB/s) with 1 file(s) remaining
Completed 473.1 KiB/473.1 KiB (4.9 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-801008216402/xgboost-2022-02-07-07-17-50-128/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed, since our model is no longer (as) effective at determining the sentiment of a user-provided review.

In a real life scenario you would check a number of different things to see what exactly is going on (one simple example of such a check is sketched at the end of this cell). In our case, we are only going to check one, and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit, so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.

Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.

To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.

**TODO:** Deploy the XGBoost model.
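As a hedged aside, here is a minimal sketch of one other kind of check you might run (the helper name and the choice of review length as the statistic are illustrative assumptions, not part of the original notebook):

```python
import numpy as np

# A very simple drift check: compare summary statistics of review length
# (in tokens) between two collections of tokenized reviews.
def length_summary(token_lists):
    lengths = np.array([len(tokens) for tokens in token_lists])
    return lengths.mean(), lengths.std()

# e.g. length_summary(new_X) for the new reviews, compared against the same
# summary computed on the original (cached) training reviews.
```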
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
Using already existing model: xgboost-2022-02-07-07-02-08-668
###Markdown
Diagnose the problem

Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews, so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.

**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
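As a brief aside, this is all `yield` and `next` are doing; the sketch below is generic Python, not tied to this data set, and passing a default to `next` avoids a `StopIteration` error once the generator runs out of samples:

```python
# Generic sketch of a generator: values are produced lazily, one per call to next().
def count_up_to(n):
    for i in range(n):
        yield i

g = count_up_to(2)
print(next(g))        # 0
print(next(g))        # 1
print(next(g, None))  # None -- generator exhausted; the default avoids StopIteration
```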
###Code
print(next(gn))
###Output
(['robert', 'altman', 'downbeat', 'new', 'fangl', 'western', 'edmund', 'naughton', 'book', 'mccabe', 'overlook', 'time', 'releas', 'past', 'year', 'garner', 'sterl', 'critic', 'follow', 'asid', 'complet', 'convinc', 'boom', 'town', 'scenario', 'charact', 'merit', 'much', 'interest', 'pictur', 'look', 'intent', 'brackish', 'unapp', 'beard', 'warren', 'beatti', 'play', 'turn', 'centuri', 'entrepreneur', 'settl', 'struggl', 'commun', 'outskirt', 'nowher', 'help', 'organ', 'first', 'brothel', 'profit', 'start', 'come', 'beatti', 'natur', 'menac', 'citi', 'tough', 'want', 'part', 'action', 'altman', 'creat', 'solemn', 'wintri', 'atmospher', 'movi', 'give', 'audienc', 'certain', 'sens', 'time', 'place', 'action', 'sorri', 'littl', 'town', 'limit', 'stori', 'made', 'vignett', 'altman', 'pace', 'deliber', 'slow', 'hardli', 'statement', 'made', 'opposit', 'fact', 'languid', 'actor', 'stare', 'without', 'much', 'mind', 'self', 'defeat', 'pictur', 'yet', 'altman', 'quirki', 'way', 'wear', 'defeat', 'proudli'], 0)
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set: the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.

To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:489: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'
warnings.warn("The parameter 'token_pattern' will not be used"
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'ghetto', 'spill', 'victorian', '21st', 'reincarn', 'playboy', 'weari'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'dubiou', 'optimist', 'sophi', 'masterson', 'omin', 'banana', 'orchestr'}
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.

**Question:** What exactly is going on here? Not only which (if any) words appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.

(TODO) Build a new model

Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag-of-words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag-of-words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag-of-words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
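As a quick, hedged illustration of why the old encoding is no longer valid (the token `'movi'` is only an assumed example of a stemmed word present in both vocabularies; any shared word works), the same token generally maps to a different column index under the two vocabularies:

```python
# Minimal sketch: the same stemmed token can occupy a different column in the
# original and the new bag-of-words encodings, so vectors built with the old
# vocabulary are meaningless to a model trained on the new one (and vice versa).
word = 'movi'  # assumed example token
print('index in original vocabulary:', vocabulary.get(word))
print('index in new vocabulary     :', new_vectorizer.vocabulary_.get(word))
```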
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.

**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.

**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Once the model has been created, we can train it with our new data.

**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
2022-02-07 07:30:42 Starting - Starting the training job...
2022-02-07 07:30:45 Starting - Launching requested ML instances...........................
2022-02-07 07:35:23 Starting - Preparing the instances for training.........
2022-02-07 07:37:03 Downloading - Downloading input data...
2022-02-07 07:37:25 Training - Downloading the training image..[34mArguments: train[0m
[34m[2022-02-07:07:37:52:INFO] Running standalone xgboost training.[0m
[34m[2022-02-07:07:37:52:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8349.61mb[0m
[34m[2022-02-07:07:37:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[07:37:52] S3DistributionType set as FullyReplicated[0m
[34m[07:37:54] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2022-02-07:07:37:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[07:37:54] S3DistributionType set as FullyReplicated[0m
[34m[07:37:55] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[07:37:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.312067#011validation-error:0.319[0m
[34mMultiple eval metrics have been passed: 'validation-error' will be used for early stopping.[0m
[34mWill train until validation-error hasn't improved in 10 rounds.[0m
[34m[07:38:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.2992#011validation-error:0.3055[0m
[34m[07:38:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.279467#011validation-error:0.2884[0m
2022-02-07 07:37:51 Training - Training image download completed. Training in progress.[34m[07:38:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.28#011validation-error:0.2891[0m
[34m[07:38:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.263867#011validation-error:0.271[0m
[34m[07:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.260467#011validation-error:0.2676[0m
[34m[07:38:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.253867#011validation-error:0.2629[0m
[34m[07:38:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.250267#011validation-error:0.2604[0m
[34m[07:38:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.2396#011validation-error:0.2462[0m
[34m[07:38:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.232067#011validation-error:0.2447[0m
[34m[07:38:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.225333#011validation-error:0.2392[0m
[34m[07:38:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.219867#011validation-error:0.2339[0m
[34m[07:38:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.217533#011validation-error:0.2309[0m
[34m[07:38:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.214733#011validation-error:0.2295[0m
[34m[07:38:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.208133#011validation-error:0.2253[0m
[34m[07:38:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.205867#011validation-error:0.2225[0m
[34m[07:38:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.202333#011validation-error:0.2196[0m
[34m[07:38:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.200267#011validation-error:0.2185[0m
[34m[07:38:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.198867#011validation-error:0.215[0m
[34m[07:38:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.197133#011validation-error:0.2129[0m
[34m[07:38:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.192933#011validation-error:0.2091[0m
[34m[07:38:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.1918#011validation-error:0.2077[0m
[34m[07:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.189667#011validation-error:0.2077[0m
[34m[07:38:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.1854#011validation-error:0.2058[0m
[34m[07:38:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.184#011validation-error:0.2065[0m
[34m[07:38:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.180667#011validation-error:0.2041[0m
[34m[07:38:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.178267#011validation-error:0.2029[0m
[34m[07:38:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.178267#011validation-error:0.2004[0m
[34m[07:38:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.177333#011validation-error:0.2007[0m
[34m[07:38:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.175533#011validation-error:0.201[0m
[34m[07:38:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.1732#011validation-error:0.1998[0m
[34m[07:38:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.1718#011validation-error:0.1989[0m
[34m[07:38:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.171#011validation-error:0.1961[0m
[34m[07:38:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.169#011validation-error:0.1958[0m
[34m[07:38:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.168067#011validation-error:0.1945[0m
[34m[07:38:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.167267#011validation-error:0.1943[0m
[34m[07:38:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.165533#011validation-error:0.194[0m
[34m[07:38:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.164467#011validation-error:0.1915[0m
[34m[07:38:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.163#011validation-error:0.1908[0m
###Markdown
(TODO) Check the new model

So now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.

To do this, we will first test our model on the new data.

**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.

**Question:** How might you address the leakage problem? One possible direction is sketched at the end of this cell.

First, we create a new transformer based on our new XGBoost model.

**TODO:** Create a transformer object from the newly created XGBoost model.
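Relating to the **Question** above, one way to avoid the leakage would be to carve off a held-out split of the new data before re-training and evaluate only on that split. The sketch below uses synthetic arrays so the cell is self-contained; in the notebook you would pass the encoded new data (e.g. `new_XV` and `new_Y`) instead, assuming they were still in memory:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Sketch: reserve a held-out portion of the *new* data that the re-trained model
# never sees, and report accuracy on that portion instead of on training data.
X = np.random.randint(0, 4, size=(200, 10))  # synthetic stand-in for bag-of-words features
y = np.random.randint(0, 2, size=200)        # synthetic stand-in for labels

X_fit, X_heldout, y_fit, y_heldout = train_test_split(
    X, y, test_size=0.2, random_state=0)

# X_fit / y_fit would then be split further into train/validation for SageMaker,
# while X_heldout / y_heldout is used only for the final accuracy check.
print(X_fit.shape, X_heldout.shape)
```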
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Next we test our model on the new data.

**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable).
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
.................................[34mArguments: serve[0m
[34m[2022-02-07 07:46:29 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[34m[2022-02-07 07:46:29 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2022-02-07 07:46:29 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2022-02-07 07:46:29 +0000] [21] [INFO] Booting worker with pid: 21[0m
[34m[2022-02-07 07:46:29 +0000] [22] [INFO] Booting worker with pid: 22[0m
[34m[2022-02-07 07:46:29 +0000] [23] [INFO] Booting worker with pid: 23[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:46:29:INFO] Model loaded successfully for worker : 21[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:46:29:INFO] Model loaded successfully for worker : 22[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:46:29:INFO] Model loaded successfully for worker : 23[0m
[34m[2022-02-07 07:46:29 +0000] [24] [INFO] Booting worker with pid: 24[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2022-02-07:07:46:30:INFO] Model loaded successfully for worker : 24[0m
[32m2022-02-07T07:46:33.777:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2022-02-07:07:46:37:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:37:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:37:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:37:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:37:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:37:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:37:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:37:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:37:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:37:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:37:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:37:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:41:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:41:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:41:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:41:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:41:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:41:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:41:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:41:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:41:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:41:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:41:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:41:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:41:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:41:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:41:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:41:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:44:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:44:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:45:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:45:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:45:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:45:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:45:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:45:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:44:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:44:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:45:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:45:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:45:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:45:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:45:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:45:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:52:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:52:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:52:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:46:53:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:46:53:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:52:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:52:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:52:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:46:53:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:46:53:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:47:00:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:47:00:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:47:00:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:47:00:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:47:00:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:47:00:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:47:00:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:47:00:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:47:00:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:47:00:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:47:00:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:47:00:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:47:00:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:47:00:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:47:00:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:47:00:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:47:03:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:47:03:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:47:03:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:47:03:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:47:04:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:47:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:47:03:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:47:03:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:47:03:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:47:03:INFO] Determined delimiter of CSV input is ','[0m
[35m[2022-02-07:07:47:04:INFO] Sniff delimiter as ','[0m
[35m[2022-02-07:07:47:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2022-02-07:07:47:04:INFO] Sniff delimiter as ','[0m
[34m[2022-02-07:07:47:04:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/470.1 KiB (2.6 MiB/s) with 1 file(s) remaining
Completed 470.1 KiB/470.1 KiB (4.6 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-801008216402/xgboost-2022-02-07-07-41-02-744/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.

However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.

To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag-of-words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Once we've loaded the original test reviews, we need to create a bag-of-words encoding of them using the new vocabulary that we created, based on the new data.

**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
new_xgb_transformer.output_path
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model.

Step 6: (TODO) Updating the Model

So we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.

Of course, to do this we need to create an endpoint configuration for our newly created model.

First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.

**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.

Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.

**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
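As an optional aside (this assumes the endpoint deployed earlier is still up), once the update below has been issued you could watch the zero-downtime switch by polling the endpoint status, which moves from `Updating` back to `InService`:

```python
import time

# Sketch: poll the endpoint status during the update. While the status is
# 'Updating', the old model keeps serving traffic; once it returns to
# 'InService', the endpoint is backed by the new model.
for _ in range(5):
    status = session.sagemaker_client.describe_endpoint(
        EndpointName=xgb_predictor.endpoint)['EndpointStatus']
    print(status)
    time.sleep(30)
```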
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
---------
###Markdown
Step 7: Delete the Endpoint

Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional Questions

This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself. For example:

- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large? Say you only received 500 samples?

Optional: Clean up

The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
#!rm -r $data_dir/*
# And then we delete the directory itself
#!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
#!rm -r $cache_dir/*
#!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis: Updating a Model in SageMaker

_Deep Learning Nanodegree Program | Deployment_

---

In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.

This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5.

Instructions

Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.

> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited, typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
%%capture
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
# For sending text messages
!pip install twilio
import sys
sys.path.insert(0, '..')
from twilio_helper import twilio_helper
notebook_name = 'IMDB Sentiment Analysis - XGBoost (Updating a Model) solution.ipynb'
twilio_helper.send_text_message(f'{notebook_name}: test')
###Output
Text message sent: SM03b4147ef69b4debb2b24c43e08bc22a
###Markdown
Step 1: Downloading the data

The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.

> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.

We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2021-04-20 15:03:16-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 24.1MB/s in 4.2s
2021-04-20 15:03:21 (19.0 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the data

The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return a unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the data

Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case and replace non-alphanumeric characters with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words features

For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set, so our transformer can only use the training set to construct a representation.
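As a small, hedged illustration of this fit-on-train-only rule (the two toy documents below are made up for the example and are not part of the data set), the vectorizer learns its vocabulary from the training documents and simply ignores unseen words at transform time:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus: fit the vocabulary on "training" documents only...
train_docs = ["a great great film", "a dull film"]
test_docs = ["a thrilling film"]  # 'thrilling' is unseen at fit time

vectorizer = CountVectorizer()
train_bow = vectorizer.fit_transform(train_docs).toarray()
test_bow = vectorizer.transform(test_docs).toarray()  # unseen words are dropped

print(vectorizer.vocabulary_)  # learned vocabulary comes from the training documents only
print(train_bow)
print(test_bow)
```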
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
# from sklearn.externals import joblib
import joblib
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoost

Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.

Writing the dataset

The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.

For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
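To make the required layout concrete, here is a minimal sketch with toy numbers (the file name `example_train.csv` is just for illustration): the label comes first on each row, and no header row or index column is written.

```python
import pandas as pd

# Toy data: first column is the label, remaining columns are features.
rows = [[1, 3, 0, 2],
        [0, 0, 1, 0]]

# header=False and index=False give the headerless, index-free layout described above.
pd.DataFrame(rows).to_csv('example_train.csv', header=False, index=False)

# example_train.csv now contains:
# 1,3,0,2
# 0,0,1,0
```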
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3

Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using, as well as custom models such as the one we will see a little later.

For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality, in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.

Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.

For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
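For contrast, a minimal sketch of the low-level route would use `boto3` directly; the bucket name and object key below are made-up placeholders, not the ones this notebook actually uses:

```python
import boto3

# Low-level alternative to session.upload_data(): you manage the bucket and key yourself.
s3 = boto3.client('s3')
s3.upload_file(Filename='../data/sentiment_update/train.csv',
               Bucket='my-example-bucket',        # placeholder bucket name
               Key='sentiment-update/train.csv')  # placeholder object key
```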
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost model

Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another:

- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Fit the XGBoost model Now that our model has been set up, we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2021-04-20 15:12:22 Starting - Starting the training job...
2021-04-20 15:12:24 Starting - Launching requested ML instances.........
2021-04-20 15:13:55 Starting - Preparing the instances for training......
2021-04-20 15:15:05 Downloading - Downloading input data...
2021-04-20 15:15:43 Training - Downloading the training image...
2021-04-20 15:16:04 Training - Training image download completed. Training in progress.[34mArguments: train[0m
[34m[2021-04-20:15:16:06:INFO] Running standalone xgboost training.[0m
[34m[2021-04-20:15:16:06:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8407.43mb[0m
[34m[2021-04-20:15:16:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[15:16:06] S3DistributionType set as FullyReplicated[0m
[34m[15:16:08] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-04-20:15:16:08:INFO] Determined delimiter of CSV input is ','[0m
[34m[15:16:08] S3DistributionType set as FullyReplicated[0m
[34m[15:16:09] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[15:16:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.2808#011validation-error:0.2982[0m
[34mMultiple eval metrics have been passed: 'validation-error' will be used for early stopping.
[0m
[34mWill train until validation-error hasn't improved in 10 rounds.[0m
[34m[15:16:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.27#011validation-error:0.2855[0m
[34m[15:16:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.268733#011validation-error:0.2818[0m
[34m[15:16:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.2642#011validation-error:0.279[0m
[34m[15:16:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.255#011validation-error:0.2725[0m
[34m[15:16:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.252267#011validation-error:0.2671[0m
[34m[15:16:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.244467#011validation-error:0.2634[0m
[34m[15:16:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.242133#011validation-error:0.2635[0m
[34m[15:16:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.2316#011validation-error:0.2536[0m
[34m[15:16:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.226733#011validation-error:0.2488[0m
[34m[15:16:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.220267#011validation-error:0.2419[0m
[34m[15:16:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.213467#011validation-error:0.2366[0m
[34m[15:16:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.207267#011validation-error:0.2297[0m
[34m[15:16:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.201867#011validation-error:0.2234[0m
[34m[15:16:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.199067#011validation-error:0.222[0m
[34m[15:16:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.198133#011validation-error:0.2222[0m
[34m[15:16:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.196267#011validation-error:0.2211[0m
[34m[15:16:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.193867#011validation-error:0.2204[0m
[34m[15:16:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.1894#011validation-error:0.2165[0m
[34m[15:16:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.186533#011validation-error:0.2134[0m
[34m[15:16:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.183933#011validation-error:0.211[0m
[34m[15:16:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.180333#011validation-error:0.2089[0m
[34m[15:16:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.179133#011validation-error:0.2073[0m
[34m[15:16:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.175733#011validation-error:0.2037[0m
[34m[15:16:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.171333#011validation-error:0.201[0m
[34m[15:16:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.167733#011validation-error:0.1996[0m
[34m[15:16:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.166#011validation-error:0.1982[0m
[34m[15:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.163933#011validation-error:0.1974[0m
[34m[15:16:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.163467#011validation-error:0.1949[0m
[34m[15:16:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.161733#011validation-error:0.193[0m
[34m[15:16:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.1612#011validation-error:0.1915[0m
[34m[15:16:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.158933#011validation-error:0.1921[0m
[34m[15:16:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.157133#011validation-error:0.1919[0m
[34m[15:16:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.1556#011validation-error:0.1893[0m
[34m[15:17:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.154467#011validation-error:0.1887[0m
[34m[15:17:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.153733#011validation-error:0.1879[0m
[34m[15:17:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.153133#011validation-error:0.1876[0m
[34m[15:17:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.152733#011validation-error:0.1856[0m
[34m[15:17:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.152067#011validation-error:0.1843[0m
[34m[15:17:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.150667#011validation-error:0.1824[0m
[34m[15:17:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[40]#011train-error:0.149067#011validation-error:0.1837[0m
[34m[15:17:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[41]#011train-error:0.146467#011validation-error:0.1829[0m
[34m[15:17:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[42]#011train-error:0.1454#011validation-error:0.1807[0m
[34m[15:17:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[43]#011train-error:0.144133#011validation-error:0.18[0m
[34m[15:17:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[44]#011train-error:0.1428#011validation-error:0.1808[0m
[34m[15:17:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[45]#011train-error:0.142#011validation-error:0.1796[0m
[34m[15:17:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[46]#011train-error:0.140133#011validation-error:0.1787[0m
[34m[15:17:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[47]#011train-error:0.138667#011validation-error:0.1772[0m
[34m[15:17:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[48]#011train-error:0.138667#011validation-error:0.1775[0m
[34m[15:17:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[49]#011train-error:0.138#011validation-error:0.1774[0m
[34m[15:17:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[50]#011train-error:0.1364#011validation-error:0.1774[0m
[34m[15:17:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[51]#011train-error:0.135933#011validation-error:0.1763[0m
[34m[15:17:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[52]#011train-error:0.134667#011validation-error:0.1749[0m
[34m[15:17:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[53]#011train-error:0.134467#011validation-error:0.1734[0m
[34m[15:17:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[54]#011train-error:0.133733#011validation-error:0.1724[0m
[34m[15:17:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[55]#011train-error:0.133#011validation-error:0.1728[0m
[34m[15:17:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[56]#011train-error:0.131267#011validation-error:0.1709[0m
[34m[15:17:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[57]#011train-error:0.1316#011validation-error:0.1708[0m
[34m[15:17:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[58]#011train-error:0.130133#011validation-error:0.1705[0m
[34m[15:17:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[59]#011train-error:0.129933#011validation-error:0.1706[0m
[34m[15:17:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[60]#011train-error:0.128333#011validation-error:0.1697[0m
[34m[15:17:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[61]#011train-error:0.1278#011validation-error:0.1698[0m
[34m[15:17:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[62]#011train-error:0.127467#011validation-error:0.1679[0m
[34m[15:17:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[63]#011train-error:0.127067#011validation-error:0.1672[0m
[34m[15:17:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[64]#011train-error:0.125933#011validation-error:0.1659[0m
[34m[15:17:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[65]#011train-error:0.124867#011validation-error:0.1665[0m
[34m[15:17:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[66]#011train-error:0.124733#011validation-error:0.1654[0m
[34m[15:17:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[67]#011train-error:0.1242#011validation-error:0.1659[0m
[34m[15:17:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[68]#011train-error:0.124#011validation-error:0.1648[0m
[34m[15:17:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[69]#011train-error:0.123467#011validation-error:0.1647[0m
[34m[15:17:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[70]#011train-error:0.122867#011validation-error:0.1645[0m
[34m[15:17:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[71]#011train-error:0.1228#011validation-error:0.1637[0m
[34m[15:17:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[72]#011train-error:0.121733#011validation-error:0.162[0m
[34m[15:17:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[73]#011train-error:0.122067#011validation-error:0.1622[0m
[34m[15:17:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[74]#011train-error:0.121467#011validation-error:0.1625[0m
[34m[15:17:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[75]#011train-error:0.1196#011validation-error:0.162[0m
[34m[15:17:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[76]#011train-error:0.118733#011validation-error:0.1606[0m
[34m[15:17:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[77]#011train-error:0.117467#011validation-error:0.1602[0m
[34m[15:18:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[78]#011train-error:0.117067#011validation-error:0.1603[0m
[34m[15:18:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[79]#011train-error:0.116333#011validation-error:0.1602[0m
[34m[15:18:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[80]#011train-error:0.1152#011validation-error:0.1597[0m
[34m[15:18:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[81]#011train-error:0.113733#011validation-error:0.1605[0m
[34m[15:18:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[82]#011train-error:0.112733#011validation-error:0.1603[0m
[34m[15:18:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[83]#011train-error:0.1126#011validation-error:0.1599[0m
[34m[15:18:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[84]#011train-error:0.1118#011validation-error:0.1598[0m
[34m[15:18:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[85]#011train-error:0.111533#011validation-error:0.1592[0m
[34m[15:18:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[86]#011train-error:0.1106#011validation-error:0.159[0m
[34m[15:18:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[87]#011train-error:0.110267#011validation-error:0.1579[0m
[34m[15:18:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[88]#011train-error:0.11#011validation-error:0.1577[0m
[34m[15:18:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[89]#011train-error:0.108933#011validation-error:0.1576[0m
[34m[15:18:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[90]#011train-error:0.108267#011validation-error:0.1572[0m
[34m[15:18:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[91]#011train-error:0.107933#011validation-error:0.1568[0m
[34m[15:18:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[92]#011train-error:0.108#011validation-error:0.1564[0m
[34m[15:18:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[93]#011train-error:0.108#011validation-error:0.156[0m
[34m[15:18:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[94]#011train-error:0.108067#011validation-error:0.1554[0m
[34m[15:18:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[95]#011train-error:0.108467#011validation-error:0.1555[0m
[34m[15:18:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[96]#011train-error:0.107933#011validation-error:0.1552[0m
[34m[15:18:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[97]#011train-error:0.108333#011validation-error:0.1544[0m
[34m[15:18:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[98]#011train-error:0.107533#011validation-error:0.155[0m
[34m[15:18:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[99]#011train-error:0.107#011validation-error:0.155[0m
[34m[15:18:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[100]#011train-error:0.106733#011validation-error:0.1544[0m
[34m[15:18:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[101]#011train-error:0.107#011validation-error:0.154[0m
[34m[15:18:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[102]#011train-error:0.106733#011validation-error:0.1543[0m
[34m[15:18:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[103]#011train-error:0.105533#011validation-error:0.1534[0m
[34m[15:18:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[104]#011train-error:0.105667#011validation-error:0.1531[0m
[34m[15:18:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[105]#011train-error:0.105333#011validation-error:0.1527[0m
[34m[15:18:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[106]#011train-error:0.105067#011validation-error:0.1526[0m
[34m[15:18:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[107]#011train-error:0.104533#011validation-error:0.1526[0m
[34m[15:18:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[108]#011train-error:0.104267#011validation-error:0.152[0m
[34m[15:18:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[109]#011train-error:0.1032#011validation-error:0.1524[0m
[34m[15:18:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[110]#011train-error:0.1026#011validation-error:0.1511[0m
[34m[15:18:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[111]#011train-error:0.103067#011validation-error:0.1502[0m
[34m[15:18:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[112]#011train-error:0.103067#011validation-error:0.1501[0m
[34m[15:18:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[113]#011train-error:0.102#011validation-error:0.1499[0m
[34m[15:18:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[114]#011train-error:0.101#011validation-error:0.1497[0m
[34m[15:18:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[115]#011train-error:0.100333#011validation-error:0.1496[0m
[34m[15:18:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[116]#011train-error:0.1004#011validation-error:0.1491[0m
[34m[15:18:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[117]#011train-error:0.100133#011validation-error:0.1501[0m
[34m[15:18:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[118]#011train-error:0.099867#011validation-error:0.1495[0m
[34m[15:18:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[119]#011train-error:0.0994#011validation-error:0.1487[0m
[34m[15:18:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[120]#011train-error:0.0986#011validation-error:0.1491[0m
[34m[15:19:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[121]#011train-error:0.098533#011validation-error:0.1486[0m
[34m[15:19:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[122]#011train-error:0.098333#011validation-error:0.1484[0m
[34m[15:19:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[123]#011train-error:0.097867#011validation-error:0.1484[0m
[34m[15:19:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[124]#011train-error:0.097#011validation-error:0.1487[0m
[34m[15:19:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[125]#011train-error:0.096733#011validation-error:0.1481[0m
[34m[15:19:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[126]#011train-error:0.097133#011validation-error:0.1483[0m
[34m[15:19:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[127]#011train-error:0.096867#011validation-error:0.1483[0m
[34m[15:19:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[128]#011train-error:0.0958#011validation-error:0.1477[0m
[34m[15:19:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[129]#011train-error:0.0954#011validation-error:0.1479[0m
[34m[15:19:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[130]#011train-error:0.094667#011validation-error:0.1479[0m
[34m[15:19:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[131]#011train-error:0.094667#011validation-error:0.1478[0m
[34m[15:19:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[132]#011train-error:0.094267#011validation-error:0.1469[0m
[34m[15:19:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[133]#011train-error:0.0936#011validation-error:0.1457[0m
[34m[15:19:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[134]#011train-error:0.093333#011validation-error:0.1454[0m
[34m[15:19:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[135]#011train-error:0.092867#011validation-error:0.1454[0m
[34m[15:19:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[136]#011train-error:0.092333#011validation-error:0.1459[0m
[34m[15:19:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[137]#011train-error:0.0918#011validation-error:0.1453[0m
[34m[15:19:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[138]#011train-error:0.091467#011validation-error:0.1454[0m
[34m[15:19:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[139]#011train-error:0.091867#011validation-error:0.1453[0m
[34m[15:19:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[140]#011train-error:0.091333#011validation-error:0.1452[0m
[34m[15:19:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[141]#011train-error:0.0912#011validation-error:0.1449[0m
[34m[15:19:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[142]#011train-error:0.091067#011validation-error:0.1449[0m
[34m[15:19:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[143]#011train-error:0.0912#011validation-error:0.1446[0m
[34m[15:19:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[144]#011train-error:0.090933#011validation-error:0.1442[0m
[34m[15:19:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[145]#011train-error:0.090867#011validation-error:0.1437[0m
[34m[15:19:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[146]#011train-error:0.090733#011validation-error:0.1435[0m
[34m[15:19:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[147]#011train-error:0.090333#011validation-error:0.1433[0m
[34m[15:19:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[148]#011train-error:0.0904#011validation-error:0.1432[0m
[34m[15:19:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[149]#011train-error:0.089867#011validation-error:0.1432[0m
[34m[15:19:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[150]#011train-error:0.089267#011validation-error:0.1427[0m
[34m[15:19:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[151]#011train-error:0.088933#011validation-error:0.1433[0m
[34m[15:19:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[152]#011train-error:0.0888#011validation-error:0.1437[0m
[34m[15:19:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[153]#011train-error:0.088533#011validation-error:0.1435[0m
[34m[15:19:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[154]#011train-error:0.088267#011validation-error:0.1431[0m
[34m[15:19:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[155]#011train-error:0.0878#011validation-error:0.143[0m
[34m[15:19:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[156]#011train-error:0.087867#011validation-error:0.1429[0m
[34m[15:19:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[157]#011train-error:0.088133#011validation-error:0.1426[0m
[34m[15:19:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[158]#011train-error:0.0882#011validation-error:0.1426[0m
[34m[15:19:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[159]#011train-error:0.087867#011validation-error:0.1418[0m
[34m[15:19:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[160]#011train-error:0.088#011validation-error:0.1413[0m
[34m[15:19:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[161]#011train-error:0.0878#011validation-error:0.1407[0m
[34m[15:19:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[162]#011train-error:0.086933#011validation-error:0.1412[0m
[34m[15:19:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[163]#011train-error:0.086867#011validation-error:0.141[0m
[34m[15:20:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[164]#011train-error:0.086667#011validation-error:0.1413[0m
[34m[15:20:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[165]#011train-error:0.086267#011validation-error:0.1414[0m
[34m[15:20:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[166]#011train-error:0.085867#011validation-error:0.1418[0m
[34m[15:20:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[167]#011train-error:0.0858#011validation-error:0.1419[0m
[34m[15:20:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[168]#011train-error:0.085067#011validation-error:0.1412[0m
[34m[15:20:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[169]#011train-error:0.085467#011validation-error:0.1414[0m
[34m[15:20:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[170]#011train-error:0.085067#011validation-error:0.1417[0m
[34m[15:20:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[171]#011train-error:0.085067#011validation-error:0.1414[0m
[34mStopping. Best iteration:[0m
[34m[161]#011train-error:0.0878#011validation-error:0.1407
[0m
2021-04-20 15:21:13 Uploading - Uploading generated training model
2021-04-20 15:21:13 Completed - Training job completed
Training seconds: 368
Billable seconds: 368
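###Markdown
With training finished, the model artifacts discussed earlier (the trained trees, packaged by SageMaker as a `model.tar.gz`) now live in S3. As a quick optional check, we can ask the fitted estimator where they were written; this relies on the `model_data` attribute exposed by the SageMaker Python SDK estimator.
###Code
# S3 URI of the model artifacts produced by the training job above.
xgb.model_data
###Output
_____no_output_____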
###Markdown
Testing the model Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with CSV data, so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set, we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
................................[34mArguments: serve[0m
[34m[2021-04-20 15:26:45 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[34m[2021-04-20 15:26:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2021-04-20 15:26:45 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2021-04-20 15:26:45 +0000] [21] [INFO] Booting worker with pid: 21[0m
[34m[2021-04-20 15:26:45 +0000] [22] [INFO] Booting worker with pid: 22[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[35mArguments: serve[0m
[35m[2021-04-20 15:26:45 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[35m[2021-04-20 15:26:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[35m[2021-04-20 15:26:45 +0000] [1] [INFO] Using worker: gevent[0m
[35m[2021-04-20 15:26:45 +0000] [21] [INFO] Booting worker with pid: 21[0m
[35m[2021-04-20 15:26:45 +0000] [22] [INFO] Booting worker with pid: 22[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:26:45:INFO] Model loaded successfully for worker : 21[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:26:45:INFO] Model loaded successfully for worker : 22[0m
[34m[2021-04-20 15:26:45 +0000] [23] [INFO] Booting worker with pid: 23[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:26:45:INFO] Model loaded successfully for worker : 23[0m
[34m[2021-04-20 15:26:45 +0000] [24] [INFO] Booting worker with pid: 24[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:26:45:INFO] Model loaded successfully for worker : 24[0m
[35m[2021-04-20:15:26:45:INFO] Model loaded successfully for worker : 21[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2021-04-20:15:26:45:INFO] Model loaded successfully for worker : 22[0m
[35m[2021-04-20 15:26:45 +0000] [23] [INFO] Booting worker with pid: 23[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2021-04-20:15:26:45:INFO] Model loaded successfully for worker : 23[0m
[35m[2021-04-20 15:26:45 +0000] [24] [INFO] Booting worker with pid: 24[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2021-04-20:15:26:45:INFO] Model loaded successfully for worker : 24[0m
[32m2021-04-20T15:26:49.356:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2021-04-20:15:26:53:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:26:53:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:26:53:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:26:53:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:26:53:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:26:53:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:26:53:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:26:53:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:26:53:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:26:53:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:26:53:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:26:53:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:26:56:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:26:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:26:56:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:26:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:26:56:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:26:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:26:56:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:26:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:26:56:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:26:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:26:56:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:26:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:26:59:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:26:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:26:59:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:26:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:02:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:02:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:03:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:03:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:03:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:03:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:02:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:02:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:03:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:03:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:03:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:03:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:06:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:06:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:06:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:06:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:06:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:07:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:06:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:07:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:10:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:10:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:10:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:10:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:10:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:10:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:10:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:10:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:10:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:10:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:10:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:10:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:10:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:10:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:10:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:10:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:18:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:18:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:18:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:18:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:18:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:18:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:18:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:18:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:18:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:18:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:18:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:18:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:18:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:18:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:18:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:18:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:25:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:25:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:25:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:25:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:25:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:25:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:25:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:25:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:25:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:25:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:26:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:26:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:25:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:25:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:26:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:29:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:29:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:27:29:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:27:29:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:29:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:29:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:27:29:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:27:29:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
download: s3://sagemaker-us-east-1-616964915547/xgboost-2021-04-20-15-21-40-014/test.csv.out to ../data/sentiment_update/test.csv.out
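###Markdown
If we preferred to stay in Python rather than shell out to the `aws` CLI, a rough boto3 equivalent might look like the sketch below. It is illustrative only: it assumes the batch transform output keeps the input file name with a `.out` suffix and simply parses the bucket and key prefix out of the transformer's output path.
###Code
import boto3

# Split the transformer's S3 output path into bucket and key prefix.
bucket, _, out_prefix = xgb_transformer.output_path.replace('s3://', '').partition('/')
# Download the batch transform result next to our other local data files.
boto3.client('s3').download_file(bucket,
                                 out_prefix.rstrip('/') + '/test.csv.out',
                                 os.path.join(data_dir, 'test.csv.out'))
###Output
_____no_output_____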
###Markdown
The last step is now to read in the output from our model, convert it to something a little more usable (in this case we want the sentiment to be either `1` for positive or `0` for negative), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
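###Markdown
For a slightly fuller picture than a single accuracy number, we could also look at how the errors split across the two classes. The cell below is a small optional sketch using scikit-learn's confusion matrix; it was not part of the original workflow.
###Code
from sklearn.metrics import confusion_matrix

# Rows are the true labels (0 = negative, 1 = positive), columns are the predicted labels.
confusion_matrix(test_y, predictions)
###Output
_____no_output_____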
###Markdown
Step 5: Looking at New Data So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app. However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current model Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it. First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now. To do so, we use the vocabulary that we constructed earlier from the original training data to build a `CountVectorizer`, which we will use to transform our new data into its bag of words encoding. **TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, each one must be the same size as the vocabulary, which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
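###Markdown
We can make the same check explicit for the entire set with a quick assertion. This is a small optional sanity check and assumes, as above, that `vocabulary` is the mapping built earlier from the training data.
###Code
# Every new review should be encoded as one count per vocabulary entry (5000 columns in our case).
assert new_XV.shape == (len(new_X), len(vocabulary))
###Output
_____no_output_____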
###Markdown
Now that we've performed the data processing that is required by our model, we can save the result locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working. First, we save the data locally. **TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3. **TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews. **TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
...............................[34mArguments: serve[0m
[34m[2021-04-20 15:33:25 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[34m[2021-04-20 15:33:25 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2021-04-20 15:33:25 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2021-04-20 15:33:25 +0000] [20] [INFO] Booting worker with pid: 20[0m
[35mArguments: serve[0m
[35m[2021-04-20 15:33:25 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[35m[2021-04-20 15:33:25 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[35m[2021-04-20 15:33:25 +0000] [1] [INFO] Using worker: gevent[0m
[35m[2021-04-20 15:33:25 +0000] [20] [INFO] Booting worker with pid: 20[0m
[34m[2021-04-20 15:33:25 +0000] [21] [INFO] Booting worker with pid: 21[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:33:25:INFO] Model loaded successfully for worker : 20[0m
[34m[2021-04-20 15:33:25 +0000] [22] [INFO] Booting worker with pid: 22[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:33:25:INFO] Model loaded successfully for worker : 21[0m
[34m[2021-04-20 15:33:25 +0000] [23] [INFO] Booting worker with pid: 23[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:33:25:INFO] Model loaded successfully for worker : 22[0m
[35m[2021-04-20 15:33:25 +0000] [21] [INFO] Booting worker with pid: 21[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2021-04-20:15:33:25:INFO] Model loaded successfully for worker : 20[0m
[35m[2021-04-20 15:33:25 +0000] [22] [INFO] Booting worker with pid: 22[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2021-04-20:15:33:25:INFO] Model loaded successfully for worker : 21[0m
[35m[2021-04-20 15:33:25 +0000] [23] [INFO] Booting worker with pid: 23[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2021-04-20:15:33:25:INFO] Model loaded successfully for worker : 22[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:33:25:INFO] Model loaded successfully for worker : 23[0m
[35m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[35m[2021-04-20:15:33:25:INFO] Model loaded successfully for worker : 23[0m
[32m2021-04-20T15:33:29.234:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2021-04-20:15:33:32:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:32:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:33:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:33:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:32:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:32:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:32:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:32:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:33:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:33:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:36:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:36:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:36:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:36:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:36:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:36:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:37:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:37:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:36:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:36:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:36:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:36:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:36:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:36:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:37:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:37:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:40:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:40:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:40:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:40:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:40:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:40:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:40:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:40:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:40:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:40:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:40:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:40:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:40:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:40:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:40:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:40:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:44:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:44:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:44:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:44:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:44:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:44:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:44:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:44:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:44:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:44:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:47:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:47:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:47:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:48:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:47:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:48:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:48:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:48:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:48:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:48:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:51:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:51:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:51:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:52:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:52:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:51:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:52:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:52:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:52:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:52:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:55:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:55:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:55:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:55:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:55:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:56:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:33:56:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:33:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:55:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:55:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:55:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:56:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:33:56:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:33:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:34:00:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:34:00:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:34:03:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:34:03:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:34:03:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:34:03:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:34:03:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:34:03:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:34:03:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:34:03:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:34:04:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:34:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:34:04:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:34:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:34:04:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:34:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:34:04:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:34:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:34:07:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:34:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:34:07:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:34:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:34:07:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:34:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:34:08:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:34:08:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:34:07:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:34:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2021-04-20:15:34:08:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:34:08:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
download: s3://sagemaker-us-east-1-616964915547/xgboost-2021-04-20-15-28-26-247/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed, since our model is no longer (as) effective at determining the sentiment of a user-provided review. In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one, and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit, so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set. Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set. To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production. **TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
Using already existing model: xgboost-2021-04-20-15-12-22-330
###Markdown
Diagnose the problem. Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews, so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect. **NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews while searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
    # Iterate over the raw reviews (in_X), their bag-of-words encodings (in_XV)
    # and their labels (in_Y), yielding only the samples that the deployed
    # model classifies incorrectly.
    for idx, smp in enumerate(in_X):
        res = round(float(xgb_predictor.predict(in_XV[idx])))
        if res != in_Y[idx]:
            yield smp, in_Y[idx]

gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the built-in `next` function on our generator.
###Code
print(next(gn))
###Output
(['compel', 'piec', 'low', 'budget', 'horror', 'rel', 'origin', 'premis', 'cast', 'fill', 'familiar', 'face', 'one', 'convinc', 'film', 'locat', 'histori', 'horror', 'film', 'could', 'anyon', 'pleas', 'tell', 'movi', 'utterli', 'underr', 'prison', 'finnish', 'director', 'harlin', 'american', 'debut', 'still', 'count', 'best', 'effort', 'even', 'though', 'went', 'make', 'blockbust', 'hit', 'like', 'die', 'hard', '2', 'cliffhang', 'deep', 'blue', 'sea', 'stori', 'entir', 'take', 'place', 'ancient', 'ramshackl', 'wyom', 'prison', 'open', 'caus', 'popul', 'modern', 'state', 'penitentiari', 'insid', 'former', 'execut', 'dungeon', 'restless', 'spirit', 'electr', 'chair', 'last', 'victim', 'still', 'dwell', 'around', 'promot', 'warden', 'eaton', 'sharp', 'lane', 'smith', 'alreadi', '40', 'year', 'ago', 'innoc', 'man', 'put', 'death', 'spirit', 'still', 'rememb', 'vile', 'role', 'unfair', 'trial', 'seem', 'time', 'vengeanc', 'final', 'arriv', 'viggo', 'mortensen', 'play', 'good', 'car', 'thief', 'prevent', 'even', 'larger', 'bodi', 'count', 'chelsea', 'field', 'human', 'social', 'worker', 'slowli', 'unravel', 'secret', 'past', 'prison', 'contain', 'half', 'dozen', 'memor', 'gore', 'sequenc', 'unbear', 'tens', 'atmospher', 'stick', 'certain', 'unlik', 'horror', 'pictur', 'decad', 'prison', 'featur', 'amaz', 'sens', 'realism', 'refer', 'authent', 'sceneri', 'mood', 'insid', 'prison', 'wall', 'cours', 'toward', 'supernatur', 'murder', 'commit', 'even', 'though', 'genuin', 'unsettl', 'well', 'film', 'best', 'part', 'imag', 'realist', 'tough', 'prison', 'drama', 'sequenc', 'combin', 'visual', 'mayhem', 'shock', 'horror', 'absolut', 'best', 'terror', 'moment', 'provid', 'nightmar', 'ever', 'sinc', 'saw', 'rather', 'young', 'age', 'focus', 'grizzli', 'death', 'struggl', 'involv', 'barb', 'wire', 'haunt', 'screenplay', 'suffer', 'one', 'flaw', 'common', 'one', 'almost', 'inevit', 'guess', 'clich', 'stori', 'introduc', 'nearli', 'everi', 'possibl', 'stereotyp', 'prison', 'surround', 'got', 'ugli', 'fat', 'pervert', 'cute', 'boy', 'toy', 'cowardli', 'racist', 'guard', 'avoid', 'confront', 'cost', 'natur', 'old', 'n', 'wise', 'black', 'con', 'serv', 'lifetim', 'hear', 'anybodi', 'yell', 'name', 'morgan', 'freeman', 'stare', 'blind', 'clich', 'advis', 'mani', 'element', 'admir', 'photographi', 'dark', 'moist', 'mysteri', 'upheld', 'long', 'success', 'support', 'inmat', 'role', 'class', 'b', 'actor', 'excel', 'fan', 'recogn', 'tom', 'everett', 'tom', 'tini', 'lister', 'even', 'immort', 'horror', 'icon', 'kane', 'hodder', 'forget', 'we', 'craven', 'god', 'aw', 'attempt', 'shocker', 'downright', 'pathet', 'chees', 'flick', 'chair', 'prison', 'chiller', 'worth', 'track', 'especi', 'consid', 'viggo', 'mortensen', 'peak', 'popular', 'nowaday', 'heard', 'star', 'success', 'franchis', 'involv', 'elv', 'hobbit', 'fairi', 'creatur', 'true', '80', 'horror', 'gem', 'ought', 'get', 'urgent', 'dvd', 'releas'], 1)
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews. To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:489: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'
warnings.warn("The parameter 'token_pattern' will not be used"
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'21st', 'playboy', 'spill', 'weari', 'ghetto', 'victorian', 'reincarn'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'banana', 'optimist', 'masterson', 'orchestr', 'sophi', 'omin', 'dubiou'}
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency. **Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account? **NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data (a small exploratory sketch follows in the next cell). (TODO) Build a new model. Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model. **NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
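Before re-encoding the data, here is a minimal exploratory sketch (not part of the original notebook, and only one of many checks you might run) that looks at the Question above: it counts how often the vocabulary-difference words actually occur in the new reviews, assuming `new_X` is still the list of tokenized reviews and `original_vocabulary` / `new_vocabulary` are the sets computed just above.
###Code
from collections import Counter

# Count raw occurrences of every token in the new reviews; new_X is a list
# of already-tokenized reviews, so we can simply flatten it.
new_counts = Counter(word for review in new_X for word in review)

# Words that entered the top-5000 vocabulary with the new data.
for word in sorted(new_vocabulary - original_vocabulary):
    print('new only:', word, new_counts[word])

# Words that dropped out of the top-5000 vocabulary with the new data.
for word in sorted(original_vocabulary - new_vocabulary):
    print('dropped: ', word, new_counts[word])
###Output
_____no_output_____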
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary that we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally, and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset, so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory, we will effectively delete the `new_X` variable. Remember that this contained a list of reviews, and each review was a list of words. Note that once this cell has been executed, you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like, but doing so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3. **TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set. **TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Once the model has been created, we can train it with our new data. **TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
2021-04-20 15:44:08 Starting - Starting the training job...
2021-04-20 15:44:10 Starting - Launching requested ML instances.........
2021-04-20 15:45:54 Starting - Preparing the instances for training......
2021-04-20 15:46:56 Downloading - Downloading input data...
2021-04-20 15:47:28 Training - Downloading the training image...
2021-04-20 15:47:52 Training - Training image download completed. Training in progress.[34mArguments: train[0m
[34m[2021-04-20:15:47:54:INFO] Running standalone xgboost training.[0m
[34m[2021-04-20:15:47:54:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8413.62mb[0m
[34m[2021-04-20:15:47:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[15:47:54] S3DistributionType set as FullyReplicated[0m
[34m[15:47:56] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2021-04-20:15:47:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[15:47:56] S3DistributionType set as FullyReplicated[0m
[34m[15:47:57] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[15:48:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.2986#011validation-error:0.3128[0m
[34mMultiple eval metrics have been passed: 'validation-error' will be used for early stopping.
[0m
[34mWill train until validation-error hasn't improved in 10 rounds.[0m
[34m[15:48:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.283067#011validation-error:0.2984[0m
[34m[15:48:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.279267#011validation-error:0.2916[0m
[34m[15:48:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.270667#011validation-error:0.2824[0m
[34m[15:48:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.270733#011validation-error:0.2821[0m
[34m[15:48:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.2648#011validation-error:0.2752[0m
[34m[15:48:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.254667#011validation-error:0.2676[0m
[34m[15:48:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.251933#011validation-error:0.2655[0m
[34m[15:48:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.248067#011validation-error:0.2631[0m
[34m[15:48:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.2352#011validation-error:0.2505[0m
[34m[15:48:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.233#011validation-error:0.2482[0m
[34m[15:48:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.228667#011validation-error:0.2433[0m
[34m[15:48:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.225333#011validation-error:0.2403[0m
[34m[15:48:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.2214#011validation-error:0.2347[0m
[34m[15:48:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.214667#011validation-error:0.2335[0m
[34m[15:48:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.210333#011validation-error:0.2291[0m
[34m[15:48:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.2056#011validation-error:0.2249[0m
[34m[15:48:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.202267#011validation-error:0.2217[0m
[34m[15:48:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.201467#011validation-error:0.2217[0m
[34m[15:48:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.197267#011validation-error:0.2202[0m
[34m[15:48:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.193133#011validation-error:0.2183[0m
[34m[15:48:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.191667#011validation-error:0.2164[0m
[34m[15:48:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.190267#011validation-error:0.2153[0m
[34m[15:48:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.1878#011validation-error:0.2147[0m
[34m[15:48:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.187067#011validation-error:0.2129[0m
[34m[15:48:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.186133#011validation-error:0.212[0m
[34m[15:48:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.183267#011validation-error:0.2102[0m
[34m[15:48:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.179333#011validation-error:0.2089[0m
[34m[15:48:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.179133#011validation-error:0.209[0m
[34m[15:48:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.177067#011validation-error:0.2049[0m
[34m[15:48:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.1746#011validation-error:0.2039[0m
[34m[15:48:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.174067#011validation-error:0.2027[0m
[34m[15:48:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.173533#011validation-error:0.2027[0m
[34m[15:48:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.172733#011validation-error:0.2023[0m
[34m[15:48:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.1716#011validation-error:0.2017[0m
[34m[15:48:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.169133#011validation-error:0.2016[0m
[34m[15:48:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.1676#011validation-error:0.2005[0m
[34m[15:48:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.1658#011validation-error:0.1985[0m
[34m[15:48:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.1652#011validation-error:0.1975[0m
[34m[15:48:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.163533#011validation-error:0.1975[0m
[34m[15:49:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[40]#011train-error:0.161667#011validation-error:0.1963[0m
[34m[15:49:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[41]#011train-error:0.1614#011validation-error:0.1953[0m
[34m[15:49:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[42]#011train-error:0.159133#011validation-error:0.1952[0m
[34m[15:49:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[43]#011train-error:0.157733#011validation-error:0.1958[0m
[34m[15:49:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[44]#011train-error:0.156067#011validation-error:0.1919[0m
[34m[15:49:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[45]#011train-error:0.155733#011validation-error:0.1917[0m
[34m[15:49:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[46]#011train-error:0.154733#011validation-error:0.1916[0m
[34m[15:49:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[47]#011train-error:0.153667#011validation-error:0.1903[0m
[34m[15:49:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[48]#011train-error:0.152533#011validation-error:0.1898[0m
[34m[15:49:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[49]#011train-error:0.1516#011validation-error:0.1893[0m
[34m[15:49:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[50]#011train-error:0.150933#011validation-error:0.1887[0m
[34m[15:49:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[51]#011train-error:0.150867#011validation-error:0.1893[0m
[34m[15:49:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[52]#011train-error:0.149733#011validation-error:0.1888[0m
[34m[15:49:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[53]#011train-error:0.1496#011validation-error:0.1895[0m
[34m[15:49:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[54]#011train-error:0.1472#011validation-error:0.1895[0m
[34m[15:49:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[55]#011train-error:0.146667#011validation-error:0.1898[0m
[34m[15:49:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[56]#011train-error:0.146533#011validation-error:0.1896[0m
[34m[15:49:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[57]#011train-error:0.1462#011validation-error:0.1887[0m
[34m[15:49:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[58]#011train-error:0.145267#011validation-error:0.1889[0m
[34m[15:49:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[59]#011train-error:0.144067#011validation-error:0.1892[0m
[34m[15:49:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[60]#011train-error:0.142267#011validation-error:0.1871[0m
[34m[15:49:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[61]#011train-error:0.142533#011validation-error:0.1876[0m
[34m[15:49:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[62]#011train-error:0.141333#011validation-error:0.1872[0m
[34m[15:49:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[63]#011train-error:0.1414#011validation-error:0.1874[0m
[34m[15:49:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[64]#011train-error:0.141333#011validation-error:0.1872[0m
[34m[15:49:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[65]#011train-error:0.140867#011validation-error:0.1876[0m
[34m[15:49:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[66]#011train-error:0.14#011validation-error:0.1872[0m
[34m[15:49:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[67]#011train-error:0.139067#011validation-error:0.1876[0m
[34m[15:49:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[68]#011train-error:0.1386#011validation-error:0.1856[0m
[34m[15:49:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[69]#011train-error:0.1384#011validation-error:0.186[0m
[34m[15:49:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[70]#011train-error:0.136267#011validation-error:0.1875[0m
[34m[15:49:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[71]#011train-error:0.135867#011validation-error:0.1877[0m
[34m[15:49:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[72]#011train-error:0.135133#011validation-error:0.1868[0m
[34m[15:49:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[73]#011train-error:0.134#011validation-error:0.1854[0m
[34m[15:49:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[74]#011train-error:0.132733#011validation-error:0.1852[0m
[34m[15:49:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[75]#011train-error:0.132533#011validation-error:0.1852[0m
[34m[15:49:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[76]#011train-error:0.1322#011validation-error:0.1844[0m
[34m[15:49:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[77]#011train-error:0.1316#011validation-error:0.1844[0m
[34m[15:49:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[78]#011train-error:0.131733#011validation-error:0.1848[0m
[34m[15:49:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[79]#011train-error:0.130533#011validation-error:0.183[0m
[34m[15:49:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[80]#011train-error:0.1296#011validation-error:0.182[0m
[34m[15:49:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[81]#011train-error:0.128467#011validation-error:0.18[0m
[34m[15:50:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[82]#011train-error:0.127133#011validation-error:0.1809[0m
[34m[15:50:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[83]#011train-error:0.127#011validation-error:0.1801[0m
[34m[15:50:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[84]#011train-error:0.1268#011validation-error:0.1801[0m
[34m[15:50:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[85]#011train-error:0.125867#011validation-error:0.1793[0m
[34m[15:50:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[86]#011train-error:0.124933#011validation-error:0.1809[0m
[34m[15:50:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[87]#011train-error:0.1254#011validation-error:0.181[0m
[34m[15:50:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[88]#011train-error:0.125267#011validation-error:0.1813[0m
[34m[15:50:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[89]#011train-error:0.125333#011validation-error:0.1821[0m
[34m[15:50:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[90]#011train-error:0.1244#011validation-error:0.1804[0m
[34m[15:50:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[91]#011train-error:0.124#011validation-error:0.1812[0m
[34m[15:50:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[92]#011train-error:0.1234#011validation-error:0.1818[0m
[34m[15:50:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[93]#011train-error:0.122733#011validation-error:0.182[0m
[34m[15:50:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[94]#011train-error:0.121467#011validation-error:0.182[0m
[34m[15:50:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[95]#011train-error:0.1214#011validation-error:0.1809[0m
[34mStopping. Best iteration:[0m
[34m[85]#011train-error:0.125867#011validation-error:0.1793
[0m
2021-04-20 15:50:29 Uploading - Uploading generated training model
2021-04-20 15:50:29 Completed - Training job completed
Training seconds: 213
Billable seconds: 213
###Markdown
(TODO) Check the new model. So now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double-check that our model is performing reasonably. To do this, we will first test our model on the new data. **Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline. **Question:** How might you address the leakage problem? (One possible approach is sketched in the cell that follows.) First, we create a new transformer based on our new XGBoost model. **TODO:** Create a transformer object from the newly created XGBoost model.
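Before creating the transformer, here is a minimal sketch (not part of the original notebook) of one way to address the leakage question above: reserve a test split *before* any training or vectorizer fitting, and only score the retrained model on that untouched split once. The `features` and `labels` arrays below are synthetic stand-ins, since the real encoded arrays were deleted earlier to save memory.
###Code
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data just to make the sketch runnable: in practice `features`
# would be the bag-of-words matrix for the new reviews and `labels` the
# matching sentiment labels (both hypothetical names here).
features = np.random.randint(0, 5, size=(100, 5000))
labels = np.random.randint(0, 2, size=100)

# Reserve a test split up front and never let it touch training, validation,
# or vectorizer fitting; score the retrained model on it exactly once.
train_val_X, test_X, train_val_y, test_y = train_test_split(
    features, labels, test_size=0.2, random_state=42)

print(train_val_X.shape, test_X.shape)
###Output
_____no_output_____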
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Next, we test our model on the new data. **TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable).
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
........................................[34mArguments: serve[0m
[34m[2021-04-20 15:57:26 +0000] [1] [INFO] Starting gunicorn 19.9.0[0m
[34m[2021-04-20 15:57:26 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2021-04-20 15:57:26 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2021-04-20 15:57:26 +0000] [21] [INFO] Booting worker with pid: 21[0m
[34m[2021-04-20 15:57:26 +0000] [22] [INFO] Booting worker with pid: 22[0m
[34m[2021-04-20 15:57:26 +0000] [23] [INFO] Booting worker with pid: 23[0m
[34m[2021-04-20 15:57:26 +0000] [24] [INFO] Booting worker with pid: 24[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:57:26:INFO] Model loaded successfully for worker : 22[0m
[34m[2021-04-20:15:57:26:INFO] Model loaded successfully for worker : 21[0m
[34m[2021-04-20:15:57:26:INFO] Model loaded successfully for worker : 23[0m
[34m/opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True)[0m
[34m[2021-04-20:15:57:26:INFO] Model loaded successfully for worker : 24[0m
[32m2021-04-20T15:57:30.081:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2021-04-20:15:57:32:INFO] Sniff delimiter as ','[0m
[35m[2021-04-20:15:57:32:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:57:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:57:32:INFO] Sniff delimiter as ','[0m
[34m[2021-04-20:15:57:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[2021-04-20:15:57:32:INFO] Sniff delimiter as ','[0m
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
download: s3://sagemaker-us-east-1-616964915547/xgboost-2021-04-20-15-50-53-477/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
len(test_X[0])
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
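# Optional sanity check (illustrative, kept as a comment): the response dictionary returned by
# create_endpoint_config contains the ARN of the configuration that was just created.
# print(new_xgb_endpoint_config_info['EndpointConfigArn'])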
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
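# Optional check while the update runs (illustrative, kept as a comment): the endpoint status
# should read 'Updating' during the switch and return to 'InService' once the new configuration is live.
# print(session.sagemaker_client.describe_endpoint(EndpointName=xgb_predictor.endpoint)['EndpointStatus'])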
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
---------------!
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples. Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# ### Similarly we will remove the files in the cache_dir directory and the directory itself
# !rm $cache_dir/*
# !rmdir $cache_dir
twilio_helper.send_text_message(f'{notebook_name}: running all cells finished')
###Output
Text message sent: SM384679a4a6f64e91acf0e571dfac3bd1
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # stem each word using the PorterStemmer created above
return words
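# Rough illustration of what review_to_words produces (hypothetical input, shown as a comment
# rather than executed): HTML is stripped, text is lower-cased, stopwords are dropped and the
# remaining words are stemmed, so something like
#   review_to_words("<br />This movie was GREAT, I loved it!")
# comes back as roughly ['movi', 'great', 'love'].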
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset, so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
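# upload_data returns the S3 URI of each uploaded file, so the variables above hold values of
# the form 's3://<default-bucket>/sentiment-update/<file>.csv' (the bucket name is account specific).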
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is require when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
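# A few notes on the hyperparameters above: eta is the learning rate applied to each new tree,
# gamma and min_child_weight control how conservative the splits are, subsample=0.8 trains each
# tree on a random 80% of the rows, and early_stopping_rounds=10 stops training early if the
# validation error fails to improve for 10 consecutive rounds (so far fewer than num_round=500
# trees may actually be built).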
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
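# The transformer writes its results back to S3 under xgb_transformer.output_path, appending
# '.out' to each input file name (e.g. test.csv -> test.csv.out), which is why we copy that
# path to the local instance a few cells below.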
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
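# Unlike a batch transform job, deploy() stands up a persistent endpoint on an ml.m4.xlarge
# instance that keeps running (and accruing cost) until it is explicitly deleted in Step 7.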
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
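# Minimal generator illustration (independent of SageMaker, assuming nothing beyond built-in
# Python): calling a function that contains `yield` returns a generator object, and each call
# to next() runs the body until the next `yield` before pausing, so get_sample only queries the
# endpoint for as many misclassified samples as we actually ask for.
def _count_up(limit):
    for i in range(limit):
        yield i
_toy_gen = _count_up(3)
# next(_toy_gen) -> 0, next(_toy_gen) -> 1, next(_toy_gen) -> 2, then StopIteration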
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the built-in `next` function on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
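# A quick, optional way to quantify the overlap between the two vocabularies (both sets are
# built above, so this is safe to run here):
print('words in both vocabularies:', len(original_vocabulary & new_vocabulary))
print('only in the original vocabulary:', len(original_vocabulary - new_vocabulary))
print('only in the new vocabulary:', len(new_vocabulary - original_vocabulary))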
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much, however if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
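# Illustrative sketch only (kept as comments, not executed here): the note above points out that
# a web app deployed as in the Web App notebook would also need the NEW vocabulary. A Lambda
# function doing the bag-of-words encoding by hand might look roughly like this; the
# review_to_words helper, the new_vocabulary_dict (word -> column index) and the endpoint name
# are assumptions standing in for whatever the deployed app actually uses.
#
# import boto3
#
# def lambda_handler(event, context):
#     words = review_to_words(event['body'])            # same preprocessing as in training
#     bow = [0] * len(new_vocabulary_dict)
#     for word in words:
#         if word in new_vocabulary_dict:
#             bow[new_vocabulary_dict[word]] += 1
#     runtime = boto3.client('sagemaker-runtime')
#     response = runtime.invoke_endpoint(EndpointName='***ENDPOINT NAME HERE***',
#                                        ContentType='text/csv',
#                                        Body=','.join(str(val) for val in bow))
#     result = response['Body'].read().decode('utf-8')
#     return {'statusCode': 200,
#             'headers': {'Content-Type': 'text/plain', 'Access-Control-Allow-Origin': '*'},
#             'body': str(round(float(result)))}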
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset, so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples. Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
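To make the encoding concrete, here is a small, hedged illustration on two made-up token lists (the `toy_*` names are invented for this example and are not used anywhere else in the notebook): each review becomes a vector of word counts over a fixed vocabulary.
###Code
# A tiny, illustrative Bag-of-Words encoding on made-up, already-tokenized "reviews".
from sklearn.feature_extraction.text import CountVectorizer
toy_reviews = [["great", "movie", "great", "cast"],
               ["terrible", "movie", "boring", "cast"]]
# The toy reviews are already tokenized, so we skip preprocessing and tokenization,
# just as the real feature extraction below does.
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_reviews).toarray()
print(toy_vectorizer.vocabulary_) # word -> column index, e.g. {'boring': 0, 'cast': 1, ...}
print(toy_features) # one row of counts per review
###Output
_____no_output_____
###Markdown
With that picture in mind, the function below extracts (and caches) the Bag-of-Words features for the real training and test sets.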
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label appears first in each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ as well as the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
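As a hedged aside (nothing in this cell talks to AWS, and the name, role ARN, image and S3 location are placeholders invented for the illustration), the low level API makes the pairing of these pieces explicit: a model object is registered by pointing an inference container at the trained artifacts.
###Code
# Illustrative sketch only -- every value below is a placeholder, not something used later.
sketch_create_model_request = {
    "ModelName": "sentiment-xgboost-sketch",                          # a unique name for the model object
    "ExecutionRoleArn": "<execution-role-arn>",                       # in practice, the role obtained below
    "PrimaryContainer": {
        "Image": "<xgboost-container-image>",                         # the inference code (a Docker image)
        "ModelDataUrl": "s3://<bucket>/<prefix>/output/model.tar.gz"  # the model artifacts produced by training
    }
}
# Passing this dictionary to session.sagemaker_client.create_model(**sketch_create_model_request)
# would register the model by hand; the high level Estimator used below takes care of this for us.
sketch_create_model_request
###Output
_____no_output_____
###Markdown
With that picture in mind, we now set everything up using the high level estimator interface.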
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
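For readers who haven't used generators before, here is a tiny, self-contained illustration (unrelated to the review data; the names are invented for the example) of the lazy evaluation that `get_sample` below relies on: the function body only runs far enough to produce the next requested value.
###Code
# A minimal generator: values are produced one at a time, only when requested.
def count_up_to(n):
    i = 0
    while i < n:
        yield i # execution pauses here until the next value is asked for
        i += 1
toy_gen = count_up_to(3)
print(next(toy_gen)) # prints 0
print(next(toy_gen)) # prints 1
###Output
_____no_output_____
###Markdown
The generator below follows the same pattern, but yields misclassified reviews instead of integers.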
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much, however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data (one hedged starting point is sketched in the aside below). (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
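As an aside before building the new model, here is one hedged starting point for the open-ended question above (a sketch only, not the prescribed solution): count how often the words that entered and left the top-5000 vocabulary actually occur in the new reviews. The `new_word_counts`, `words_added` and `words_removed` names are invented for this sketch.
###Code
# Illustrative sketch: how often do the vocabulary changes actually show up in the new reviews?
from collections import Counter
new_word_counts = Counter()
for review in new_X: # new_X still holds the tokenized new reviews at this point
    new_word_counts.update(review)
words_added = new_vocabulary - original_vocabulary
words_removed = original_vocabulary - new_vocabulary
# Newly appearing vocabulary words, ordered by how often they occur in the new data
print(sorted(words_added, key=lambda w: new_word_counts[w], reverse=True)[:10])
# And how often a few of the words that dropped out still occur
print([(w, new_word_counts[w]) for w in sorted(words_removed)[:10]])
###Output
_____no_output_____
###Markdown
Returning to the prescribed steps, we now encode the new data using the new vocabulary.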
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
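One hedged way to approach the leakage question above (a sketch only, not part of the prescribed solution) is to hold out a slice of the new data before re-training and only ever report accuracy on that slice. The `new_rest_*` and `new_heldout_*` names are invented for the sketch, and since `new_XV` was cleared earlier to save memory the guard keeps this cell runnable as written.
###Code
# Sketch of a leakage-free evaluation: set aside a held-out portion of the new data
# *before* fitting the new model, and never train on it.
from sklearn.model_selection import train_test_split
if new_XV is not None: # new_XV was set to None above to free memory, so this is illustrative only
    new_rest_X, new_heldout_X, new_rest_y, new_heldout_y = train_test_split(
        new_XV, new_Y, test_size=0.2, random_state=0)
    # The new model would then be trained and validated on new_rest_* only,
    # and its accuracy reported on new_heldout_*.
###Output
_____no_output_____
###Markdown
With that caveat noted, we proceed with the baseline check and first create a transformer from the new model.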
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
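###Markdown
As an optional sanity check (added here as an illustration, not part of the original walkthrough), we can read back the first row of the training file to confirm that it has no header and that the label sits in the first column.
###Code
# Column 0 should hold the 0/1 sentiment label and the row should contain
# 5001 values in total (1 label + 5000 bag-of-words counts).
sample_row = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, nrows=1)
print(sample_row.shape)
print(sample_row.iloc[0, 0])
###Output
_____no_output_____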
###Markdown
Uploading Training / Validation files to S3 Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later. For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option. Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded. For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
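###Markdown
For comparison, here is a rough sketch of the low-level alternative (an added illustration, assuming the files and session above exist): uploading one of the same files directly with the boto3 S3 client instead of `upload_data()`.
###Code
# Low-level equivalent of session.upload_data() for a single file (illustration only).
import boto3

s3_client = boto3.client('s3')
s3_client.upload_file(os.path.join(data_dir, 'train.csv'), # local file to upload
                      session.default_bucket(),            # destination bucket
                      '{}/train.csv'.format(prefix))       # destination key
###Output
_____no_output_____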
###Markdown
Creating the XGBoost model Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training. The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data. The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the model Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
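###Markdown
Beyond a single accuracy number, a quick optional look at the confusion matrix (an added illustration) shows how the errors split between the positive and negative classes.
###Code
# Rows correspond to the true labels and columns to the predicted labels.
from sklearn.metrics import confusion_matrix
confusion_matrix(test_y, predictions)
###Output
_____no_output_____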
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review. In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set. Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set. To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production. **TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
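###Markdown
If generators are unfamiliar, here is a tiny self-contained illustration (added, and unrelated to the review data): a generator produces its values lazily, one per call to `next()`.
###Code
# A minimal generator: nothing is computed until a value is requested.
def squares(n):
    for i in range(n):
        yield i * i

g = squares(3)
print(next(g), next(g), next(g)) # prints: 0 1 4
###Output
_____no_output_____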
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much, however if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency. **Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account? **NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new model Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model. **NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
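One possible way to start investigating the question above (an added, hedged sketch that reuses the variables defined in the cells above): count how often each word occurs in the new reviews and look at the most frequent words that appear in the new vocabulary but not in the original one.
###Code
# Tally raw word frequencies in the new reviews (each review is already a list of words),
# then print the most common words that are new to the vocabulary.
from collections import Counter

word_counts = Counter()
for review_words in new_X:
    word_counts.update(review_words)

newly_introduced = new_vocabulary - original_vocabulary
most_common_new = sorted(newly_introduced, key=lambda w: -word_counts[w])[:20]
print([(w, word_counts[w]) for w in most_common_new])
###Output
_____no_output_____
###Markdown
With that exploration in mind, we return to the plan above and encode the new data using the new vocabulary.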
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
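One possible way to address the leakage question above (an added, hedged sketch using random stand-in data, since the real encoded arrays were freed earlier): reserve a slice of the newly collected data for evaluation *before* any re-training and only report performance on that held-out slice.
###Code
# Illustration only: fake_features / fake_labels are random stand-ins for the encoded new reviews.
import numpy as np
from sklearn.model_selection import train_test_split

fake_features = np.random.randint(0, 3, size=(1000, 5000))
fake_labels = np.random.randint(0, 2, size=1000)

# Hold out 20% up front; train only on the first pieces and evaluate only on the held-out ones.
train_feats, holdout_feats, train_labels, holdout_labels = train_test_split(
    fake_features, fake_labels, test_size=0.2, random_state=0)
print(train_feats.shape, holdout_feats.shape)
###Output
_____no_output_____
###Markdown
With that caveat noted, we go ahead and create the transformer from the new model.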
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional Questions This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself. For example,
- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large? Say you only received 500 samples?

Optional: Clean up The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case and keep only letters and digits
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [stemmer.stem(w) for w in words] # Stem each word using the PorterStemmer created above
return words
review_to_words(train_X[100])
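# Added illustration (hypothetical input): the same preprocessing applied to a short literal
# review. After HTML stripping, lower-casing, stop-word removal and stemming we expect
# something like ['movi', 'great', 'love', 'act'].
review_to_words("<br />This movie was GREAT! I loved the acting.")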
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
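###Markdown
As an added illustration (assuming the cell above has run), we can also recover the most frequent vocabulary words across the whole training set directly from the count matrix.
###Code
# Sum each column of the bag-of-words matrix to get per-word totals, then map the
# largest totals back to their words via the fitted vocabulary.
index_to_word = {index: word for word, index in vocabulary.items()}
word_totals = train_X.sum(axis=0)
top_indices = np.argsort(word_totals)[::-1][:10]
print([(index_to_word[i], int(word_totals[i])) for i in top_indices])
###Output
_____no_output_____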
###Markdown
Step 4: Classification using XGBoost Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the dataset The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3 Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later. For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option. Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded. For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
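###Markdown
As an optional check (an added sketch using boto3 directly, not part of the original walkthrough), we can list what now sits under our prefix in the default bucket.
###Code
# List the objects that were just uploaded under the chosen prefix.
import boto3

s3_client = boto3.client('s3')
response = s3_client.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
print([obj['Key'] for obj in response.get('Contents', [])])
###Output
_____no_output_____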
###Markdown
Creating the XGBoost model Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training. The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data. The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the model Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
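###Markdown
For a slightly more detailed picture than a single accuracy number (an added illustration), we can also print per-class precision and recall.
###Code
from sklearn.metrics import classification_report
print(classification_report(test_y, predictions))
###Output
_____no_output_____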
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.

**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.

In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one, and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.

Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.

To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.

**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problem

Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.

**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
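
For anyone unfamiliar with them, here is a minimal, self-contained sketch of how a generator works, using toy data that has nothing to do with our reviews: values are produced lazily, one at a time, each time `next` is called.
###Code
# A minimal generator sketch (toy example, not part of the original notebook):
# squares(n) yields one square at a time instead of building the whole list up front.
def squares(n):
    for i in range(n):
        yield i * i

sq = squares(5)
print(next(sq))  # prints 0
print(next(sq))  # prints 1
###Output
_____no_output_____
###Markdown
We use the same idea below to lazily search through the new reviews for misclassified samples.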
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced, or some other artifact of popular culture that has changed the way that people write movie reviews.

To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.

**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.

(TODO) Build a new model

Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.

In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
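
Before building the new model, one possible way to start investigating the question above is to count how often the words unique to the new vocabulary actually occur in the new reviews. The cell below is only a sketch of such an exploration (the variable `new_counts` is ours, not part of the original notebook) and assumes that `new_X`, `new_vectorizer`, `original_vocabulary` and `new_vocabulary` are still in memory from the cells above.
###Code
# A sketch: total occurrence counts, across the new reviews, of the words that appear
# in the new vocabulary but not in the original one.
import numpy as np

new_counts = np.asarray(new_vectorizer.transform(new_X).sum(axis=0)).flatten()

for word in (new_vocabulary - original_vocabulary):
    print(word, new_counts[new_vectorizer.vocabulary_[word]])
###Output
_____no_output_____
###Markdown
If a handful of these words dominate the counts, that is a strong hint about what has changed. With that exploration in hand, we return to encoding the new data with the new vocabulary.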
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.

**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.

**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.

**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new model

So now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double-check that our model is performing reasonably.

To do this, we will first test our model on the new data.

**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.

**Question:** How might you address the leakage problem?

First, we create a new transformer based on our new XGBoost model.

**TODO:** Create a transformer object from the newly created XGBoost model.
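
Before moving on to the transformer, here is one way you might address the leakage question above: carve a held-out test split out of the new data *before* any re-training, and only ever report accuracy on that split. The cell below is an illustrative sketch on stand-in arrays; the variable names are ours and the split is not used anywhere else in this notebook.
###Code
# A sketch of a leakage-free evaluation setup: reserve a test split before training so
# the re-trained model never sees it. The arrays here are stand-ins for the encoded
# reviews and labels.
import numpy as np
from sklearn.model_selection import train_test_split

example_X = np.random.randint(0, 3, size=(100, 10))  # stand-in for bag-of-words features
example_y = np.random.randint(0, 2, size=100)        # stand-in for sentiment labels

example_train_X, example_test_X, example_train_y, example_test_y = train_test_split(
    example_X, example_y, test_size=0.2)

print(example_train_X.shape, example_test_X.shape)
###Output
_____no_output_____
###Markdown
With that caveat noted, we create the transformer for our (admittedly leaky) baseline check.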
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.

**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable).
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.

However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.

To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.

**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model.

Step 6: (TODO) Updating the Model

So we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.

Of course, to do this we need to create an endpoint configuration for our newly created model.

First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.

**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.

Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.

**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the Endpoint

Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional Questions

This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.

For example,

- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large? Say you only received 500 samples?

Optional: Clean up

The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis

Updating a Model in SageMaker

_Deep Learning Nanodegree Program | Deployment_

---

In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.

This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5.

Instructions

Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.

> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.

Step 1: Downloading the data

The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.

> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.

We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the data

The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the data

Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Keep only letters and digits, convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [stemmer.stem(w) for w in words] # stem each word using the PorterStemmer created above
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words features

For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
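
As a quick illustration of what this encoding looks like, consider the toy sketch below. The two tiny "documents" and all variable names are made up for illustration and are not used elsewhere in the notebook.
###Code
# A toy bag-of-words sketch on two already-tokenized documents. We pass identity
# functions for preprocessor and tokenizer (the same trick used in the real cell below)
# because the documents are already lists of words.
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = [['great', 'movie', 'great', 'acting'],
            ['terrible', 'movie']]

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vectorizer.fit_transform(toy_docs).toarray())  # one row per document, one column per word
print(toy_vectorizer.vocabulary_)                        # word -> column index
###Output
_____no_output_____
###Markdown
Each row counts how often each vocabulary word appears in one document. The cell below builds the same kind of encoding for the real reviews, with a capped vocabulary size and caching.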
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoost

Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.

Writing the dataset

The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.

For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3

Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.

For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.

Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.

For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
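
For comparison only, the low-level route would talk to S3 directly with `boto3`. The cell below is just a sketch and is not needed for this notebook; the bucket name is a placeholder, and in practice you would reuse the session's default bucket (set up in the next cell) rather than hard-coding one.
###Code
# A sketch of the low-level alternative: upload a csv to S3 with boto3 directly.
# The bucket name below is a placeholder, so the call is left commented out.
import boto3

s3_client = boto3.client('s3')
# s3_client.upload_file('../data/sentiment_update/train.csv',
#                       'replace-with-your-bucket', 'sentiment-update/train.csv')
###Output
_____no_output_____
###Markdown
In this notebook we stick with the high-level `upload_data()` method.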
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost model

Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.

- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost model

Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the model

Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set.

To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New Data

So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.

However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`.

(TODO) Testing the current model

Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it.

First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.

To do so, we use the vocabulary that we constructed earlier from the original training data to build a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.

**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.

First, we save the data locally.

**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.

**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.

**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.

In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one, and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.

Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.

To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.

**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problem

Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.

**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced, or some other artifact of popular culture that has changed the way that people write movie reviews.

To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.

**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.

(TODO) Build a new model

Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.

In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.

**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.

**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.

**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
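###Markdown
Regarding the leakage question raised above, one option (a sketch only, not what this notebook does) is to hold out a slice of the new data before re-training and report performance only on that held-out slice. The snippet assumes the bag-of-words encoded new data and labels are available as `new_XV` and `new_Y`; recall that `new_XV` was set to None earlier to save memory, so it would need to be re-created first.
###Code
# A sketch of one way to avoid the leakage described above: keep a held-out
# portion of the new data that is never used for training or validation.
# Assumes `new_XV` and `new_Y` have been re-created / are still in memory.
from sklearn.model_selection import train_test_split

new_fit_X, new_holdout_X, new_fit_y, new_holdout_y = train_test_split(
    new_XV, new_Y, test_size=0.2)

# The new model would be trained (and validated) on new_fit_* only, while
# new_holdout_* would give an honest estimate of its performance.
print(len(new_holdout_y), 'held-out samples')
###Output
_____no_output_____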
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
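###Markdown
Accuracy alone can hide class-specific behaviour, so as an optional extra we can also look at the confusion matrix. This uses the `new_Y` labels and the `predictions` list from the cell above.
###Code
# Optional extra: a confusion matrix gives a class-by-class view of the errors.
# Assumes `new_Y` and `predictions` are still defined from the cell above.
from sklearn.metrics import confusion_matrix

print(confusion_matrix(new_Y, predictions))
###Output
_____no_output_____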
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
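###Markdown
Since the premise of this exercise is that the vocabulary has shifted, it can be instructive (though not required) to look at which words appear in only one of the two vocabularies. The sketch below assumes the original `vocabulary` dictionary and the `new_vectorizer` object are still in memory.
###Code
# Optional: compare the original vocabulary with the new one.
# Assumes `vocabulary` (the original) and `new_vectorizer` are still defined.
original_words = set(vocabulary.keys())
new_words = set(new_vectorizer.vocabulary_.keys())

print('Words only in the original vocabulary:', len(original_words - new_words))
print('Words only in the new vocabulary:', len(new_words - original_words))
print('Sample of new-only words:', sorted(new_words - original_words)[:10])
###Output
_____no_output_____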
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
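###Markdown
If we want more detail than just the name, the low level SageMaker client can describe the model for us. This is an optional sketch using `describe_model`, assuming `session` and `new_xgb_transformer` are defined as above.
###Code
# Optional: inspect the model object that the transformer created for us.
# Assumes `session` and `new_xgb_transformer` are defined in earlier cells.
model_info = session.sagemaker_client.describe_model(ModelName=new_xgb_transformer.model_name)
print(model_info['PrimaryContainer']['Image'])
print(model_info['PrimaryContainer']['ModelDataUrl'])
###Output
_____no_output_____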
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
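###Markdown
Before asking SageMaker to switch over, it can be reassuring to confirm that the configuration looks the way we expect. A minimal, optional check using `describe_endpoint_config` is sketched below.
###Code
# Optional check: confirm the endpoint configuration was created as intended.
# Assumes `session` and `new_xgb_endpoint_config_name` are defined above.
config_info = session.sagemaker_client.describe_endpoint_config(
    EndpointConfigName=new_xgb_endpoint_config_name)

for variant in config_info['ProductionVariants']:
    print(variant['VariantName'], variant['ModelName'], variant['InstanceType'])
###Output
_____no_output_____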
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
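###Markdown
Alternatively (or in addition), we can poll the endpoint status ourselves with the low level client; while the update is in progress the status reads `Updating`, and it returns to `InService` once the new configuration is live. An optional sketch of this check follows.
###Code
# Optional: inspect the endpoint status directly instead of (or after) waiting.
# Assumes `session` and `xgb_predictor` are defined in earlier cells.
endpoint_info = session.sagemaker_client.describe_endpoint(EndpointName=xgb_predictor.endpoint)
print(endpoint_info['EndpointStatus'])
print(endpoint_info['EndpointConfigName'])
###Output
_____no_output_____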
###Markdown
Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
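###Markdown
If we want to double check that nothing is left running (and therefore still costing money), we can list the endpoints in our account and region. This is an optional sketch; note that an endpoint may briefly show a `Deleting` status before it disappears from the list.
###Code
# Optional: list the remaining endpoints to confirm ours has been removed.
# Assumes `session` is still defined from earlier cells.
endpoints = session.sagemaker_client.list_endpoints()['Endpoints']
for ep in endpoints:
    print(ep['EndpointName'], ep['EndpointStatus'])
print('Endpoints still listed:', len(endpoints))
###Output
_____no_output_____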
###Markdown
Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself. For example,
- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large? Say you only received 500 samples?
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
--2020-05-19 11:28:05-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 22.3MB/s in 5.2s
2020-05-19 11:28:10 (15.5 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return a unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Wrote preprocessed data to cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
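###Markdown
The identity `preprocessor` and `tokenizer` functions used above are easy to misread, so here is a tiny, self-contained illustration (with made-up, already-stemmed reviews) of how `CountVectorizer` behaves when the documents are already tokenized into lists of words.
###Code
# A tiny illustration (made-up data) of the CountVectorizer setup used above:
# the documents are already lists of words, so identity functions are passed
# for the preprocessor and tokenizer to skip those steps.
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = [['great', 'movi', 'love'],
            ['bad', 'movi', 'hate'],
            ['love', 'love', 'great']]

toy_vectorizer = CountVectorizer(max_features=10,
                                 preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_docs).toarray()

print(toy_vectorizer.vocabulary_)  # word -> column index
print(toy_features)                # one row of word counts per document
###Output
_____no_output_____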
###Markdown
Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset, so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first in each row. For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
whos
###Output
Variable Type Data/Info
---------------------------------------------------------------
BeautifulSoup type <class 'bs4.BeautifulSoup'>
CountVectorizer type <class 'sklearn.feature_e<...>on.text.CountVectorizer'>
PorterStemmer ABCMeta <class 'nltk.stem.porter.PorterStemmer'>
StemmerI ABCMeta <class 'nltk.stem.api.StemmerI'>
accuracy_score function <function accuracy_score at 0x7fbadcdd5d90>
cache_dir str ../cache/sentiment_analysis
container str 811284229777.dkr.ecr.us-e<...>1.amazonaws.com/xgboost:1
data dict n=2
data_dir str ../data/sentiment_update
demo function <function demo at 0x7fbadb1c58c8>
extract_BoW_features function <function extract_BoW_features at 0x7fbac29e0158>
get_execution_role function <function get_execution_role at 0x7fbacc132ea0>
get_image_uri function <function get_image_uri at 0x7fbad6a49a60>
glob module <module 'glob' from '/hom<...>6/lib/python3.6/glob.py'>
joblib module <module 'sklearn.external<...>nals/joblib/__init__.py'>
labels dict n=2
new_X list n=25000
new_XV ndarray 25000x5000: 125000000 elems, type `int64`, 1000000000 bytes (953.67431640625 Mb)
new_Y list n=25000
new_data module <module 'new_data' from '<...>ni-Projects/new_data.py'>
nltk module <module 'nltk' from '/hom<...>ckages/nltk/__init__.py'>
np module <module 'numpy' from '/ho<...>kages/numpy/__init__.py'>
os module <module 'os' from '/home/<...>p36/lib/python3.6/os.py'>
pd module <module 'pandas' from '/h<...>ages/pandas/__init__.py'>
pickle module <module 'pickle' from '/h<...>lib/python3.6/pickle.py'>
predictions list n=25000
prefix str sentiment-update
prepare_imdb_data function <function prepare_imdb_data at 0x7fbb02766620>
preprocess_data function <function preprocess_data at 0x7fbadf2b2268>
print_function _Feature _Feature((2, 6, 0, 'alpha<...>0, 0, 'alpha', 0), 65536)
python_2_unicode_compatible function <function python_2_unicod<...>atible at 0x7fbae55a6ae8>
re module <module 're' from '/home/<...>p36/lib/python3.6/re.py'>
read_imdb_data function <function read_imdb_data at 0x7fbb027668c8>
review_to_words function <function review_to_words at 0x7fbadaeedbf8>
role str arn:aws:iam::133790504590<...>utionRole-20200517T231641
s3_input_train s3_input <sagemaker.inputs.s3_inpu<...>object at 0x7fbad755c048>
s3_input_validation s3_input <sagemaker.inputs.s3_inpu<...>object at 0x7fbad755cc18>
sagemaker module <module 'sagemaker' from <...>s/sagemaker/__init__.py'>
session Session <sagemaker.session.Sessio<...>object at 0x7fbad7449da0>
shuffle function <function shuffle at 0x7fbae557c6a8>
stemmer PorterStemmer <PorterStemmer>
stopwords WordListCorpusReader <WordListCorpusReader in <...>_data/corpora/stopwords'>
test_X NoneType None
test_location str s3://sagemaker-us-east-1-<...>sentiment-update/test.csv
test_y list n=25000
train_X NoneType None
train_location str s3://sagemaker-us-east-1-<...>entiment-update/train.csv
train_y NoneType None
unicode_literals _Feature _Feature((2, 6, 0, 'alpha<...>, 0, 'alpha', 0), 131072)
val_X NoneType None
val_location str s3://sagemaker-us-east-1-<...>ent-update/validation.csv
val_y NoneType None
vectorizer CountVectorizer CountVectorizer(analyzer=<...>: 1932, 'aborigin': 102})
vocabulary dict n=5000
xgb Estimator <sagemaker.estimator.Esti<...>object at 0x7fbad755cdd8>
xgb_transformer Transformer <sagemaker.transformer.Tr<...>object at 0x7fbad75a0358>
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models, such as the XGBoost model we will be using, as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality, in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach, although using the low-level approach is certainly an option.Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
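###Markdown
For contrast with the high level `upload_data()` call above, the same upload could be done with the low level boto3 API. This is only a sketch of the alternative mentioned in the text; the object key below mirrors the `'<prefix>/<filename>'` layout that `upload_data()` produces, but treat the exact key as an assumption.
###Code
# A low level alternative to session.upload_data(), shown only for contrast.
# Assumes `session`, `data_dir` and `prefix` are defined in earlier cells, and
# that the '<prefix>/<filename>' key layout matches what upload_data() uses.
import os
import boto3

s3_client = boto3.client('s3')
s3_client.upload_file(os.path.join(data_dir, 'train.csv'),
                      session.default_bucket(),
                      '{}/{}'.format(prefix, 'train.csv'))
###Output
_____no_output_____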
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2020-05-19 12:30:44 Starting - Starting the training job...
2020-05-19 12:30:50 Starting - Launching requested ML instances.........
2020-05-19 12:32:34 Starting - Preparing the instances for training......
2020-05-19 12:33:37 Downloading - Downloading input data...
2020-05-19 12:34:13 Training - Training image download completed. Training in progress..[34mArguments: train[0m
[34m[2020-05-19:12:34:13:INFO] Running standalone xgboost training.[0m
[34m[2020-05-19:12:34:13:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8462.64mb[0m
[34m[2020-05-19:12:34:13:INFO] Determined delimiter of CSV input is ','[0m
[34m[12:34:13] S3DistributionType set as FullyReplicated[0m
[34m[12:34:15] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-05-19:12:34:15:INFO] Determined delimiter of CSV input is ','[0m
[34m[12:34:15] S3DistributionType set as FullyReplicated[0m
[34m[12:34:16] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[12:34:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.2984#011validation-error:0.303[0m
[34mMultiple eval metrics have been passed: 'validation-error' will be used for early stopping.
[0m
[34mWill train until validation-error hasn't improved in 10 rounds.[0m
[34m[12:34:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.281133#011validation-error:0.288[0m
[34m[12:34:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.265867#011validation-error:0.2768[0m
[34m[12:34:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.262467#011validation-error:0.2739[0m
[34m[12:34:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.2566#011validation-error:0.2697[0m
[34m[12:34:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.2506#011validation-error:0.2645[0m
[34m[12:34:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.244267#011validation-error:0.2597[0m
[34m[12:34:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.231267#011validation-error:0.2521[0m
[34m[12:34:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.223867#011validation-error:0.2438[0m
[34m[12:34:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.220467#011validation-error:0.2394[0m
[34m[12:34:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.214733#011validation-error:0.2328[0m
[34m[12:34:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.212667#011validation-error:0.2317[0m
[34m[12:34:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.2104#011validation-error:0.2295[0m
[34m[12:34:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.208533#011validation-error:0.2266[0m
[34m[12:34:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.206#011validation-error:0.2245[0m
[34m[12:34:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.201933#011validation-error:0.2181[0m
[34m[12:34:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.199667#011validation-error:0.2167[0m
[34m[12:34:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.194933#011validation-error:0.2138[0m
[34m[12:34:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.1912#011validation-error:0.2117[0m
[34m[12:34:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.1874#011validation-error:0.2085[0m
[34m[12:34:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.184333#011validation-error:0.2072[0m
[34m[12:34:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.181133#011validation-error:0.2048[0m
[34m[12:34:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.178333#011validation-error:0.2018[0m
[34m[12:34:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.1756#011validation-error:0.2005[0m
[34m[12:34:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.174267#011validation-error:0.1982[0m
[34m[12:34:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.172067#011validation-error:0.1951[0m
[34m[12:34:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.1696#011validation-error:0.1955[0m
[34m[12:34:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.165067#011validation-error:0.1938[0m
[34m[12:34:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.163#011validation-error:0.1911[0m
[34m[12:34:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.1624#011validation-error:0.1875[0m
[34m[12:34:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.159067#011validation-error:0.187[0m
[34m[12:35:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.157267#011validation-error:0.1863[0m
[34m[12:35:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.155867#011validation-error:0.1849[0m
[34m[12:35:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.154267#011validation-error:0.185[0m
[34m[12:35:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.1538#011validation-error:0.1831[0m
[34m[12:35:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.1528#011validation-error:0.1825[0m
[34m[12:35:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.151867#011validation-error:0.1827[0m
[34m[12:35:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.1514#011validation-error:0.1815[0m
[34m[12:35:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.1492#011validation-error:0.1804[0m
[34m[12:35:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.149#011validation-error:0.18[0m
[34m[12:35:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[40]#011train-error:0.147067#011validation-error:0.179[0m
[34m[12:35:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[41]#011train-error:0.146133#011validation-error:0.1786[0m
[34m[12:35:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[42]#011train-error:0.144933#011validation-error:0.1779[0m
[34m[12:35:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[43]#011train-error:0.144933#011validation-error:0.1757[0m
[34m[12:35:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[44]#011train-error:0.143467#011validation-error:0.175[0m
[34m[12:35:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[45]#011train-error:0.141933#011validation-error:0.1744[0m
[34m[12:35:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[46]#011train-error:0.1406#011validation-error:0.1733[0m
[34m[12:35:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[47]#011train-error:0.1392#011validation-error:0.1738[0m
[34m[12:35:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[48]#011train-error:0.138467#011validation-error:0.1741[0m
[34m[12:35:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[49]#011train-error:0.137533#011validation-error:0.1724[0m
[34m[12:35:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[50]#011train-error:0.1374#011validation-error:0.1707[0m
[34m[12:35:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[51]#011train-error:0.137533#011validation-error:0.1704[0m
[34m[12:35:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[52]#011train-error:0.137#011validation-error:0.1708[0m
[34m[12:35:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[53]#011train-error:0.135467#011validation-error:0.1702[0m
[34m[12:35:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[54]#011train-error:0.1348#011validation-error:0.1692[0m
[34m[12:35:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[55]#011train-error:0.133933#011validation-error:0.1689[0m
[34m[12:35:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[56]#011train-error:0.133667#011validation-error:0.1681[0m
[34m[12:35:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[57]#011train-error:0.131733#011validation-error:0.168[0m
[34m[12:35:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[58]#011train-error:0.131667#011validation-error:0.1679[0m
[34m[12:35:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[59]#011train-error:0.130533#011validation-error:0.1672[0m
[34m[12:35:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[60]#011train-error:0.1302#011validation-error:0.1659[0m
[34m[12:35:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[61]#011train-error:0.129867#011validation-error:0.1661[0m
[34m[12:35:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[62]#011train-error:0.1294#011validation-error:0.1659[0m
[34m[12:35:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[63]#011train-error:0.1278#011validation-error:0.1652[0m
[34m[12:35:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[64]#011train-error:0.127533#011validation-error:0.1645[0m
[34m[12:35:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[65]#011train-error:0.126333#011validation-error:0.1641[0m
[34m[12:35:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[66]#011train-error:0.125467#011validation-error:0.1634[0m
[34m[12:35:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[67]#011train-error:0.124733#011validation-error:0.1628[0m
[34m[12:35:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[68]#011train-error:0.123867#011validation-error:0.162[0m
[34m[12:35:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[69]#011train-error:0.1238#011validation-error:0.1612[0m
[34m[12:35:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[70]#011train-error:0.122533#011validation-error:0.1619[0m
[34m[12:35:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[71]#011train-error:0.123067#011validation-error:0.1623[0m
[34m[12:35:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[72]#011train-error:0.122933#011validation-error:0.1621[0m
[34m[12:35:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[73]#011train-error:0.122133#011validation-error:0.1614[0m
[34m[12:35:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[74]#011train-error:0.1212#011validation-error:0.1607[0m
[34m[12:35:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[75]#011train-error:0.121067#011validation-error:0.1608[0m
[34m[12:35:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[76]#011train-error:0.1198#011validation-error:0.1607[0m
[34m[12:35:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[77]#011train-error:0.119733#011validation-error:0.1612[0m
[34m[12:35:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[78]#011train-error:0.1194#011validation-error:0.16[0m
[34m[12:36:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[79]#011train-error:0.118867#011validation-error:0.1599[0m
[34m[12:36:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[80]#011train-error:0.1174#011validation-error:0.1589[0m
[34m[12:36:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[81]#011train-error:0.117067#011validation-error:0.1581[0m
[34m[12:36:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[82]#011train-error:0.1168#011validation-error:0.1578[0m
[34m[12:36:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[83]#011train-error:0.1158#011validation-error:0.157[0m
[34m[12:36:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[84]#011train-error:0.115267#011validation-error:0.1569[0m
[34m[12:36:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[85]#011train-error:0.115533#011validation-error:0.1562[0m
[34m[12:36:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[86]#011train-error:0.115267#011validation-error:0.1555[0m
[34m[12:36:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[87]#011train-error:0.114667#011validation-error:0.1556[0m
[34m[12:36:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[88]#011train-error:0.113267#011validation-error:0.1551[0m
[34m[12:36:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[89]#011train-error:0.112133#011validation-error:0.1557[0m
[34m[12:36:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[90]#011train-error:0.112467#011validation-error:0.1554[0m
[34m[12:36:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[91]#011train-error:0.1112#011validation-error:0.1548[0m
[34m[12:36:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[92]#011train-error:0.1116#011validation-error:0.1544[0m
[34m[12:36:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[93]#011train-error:0.1112#011validation-error:0.1543[0m
[34m[12:36:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[94]#011train-error:0.110333#011validation-error:0.154[0m
[34m[12:36:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[95]#011train-error:0.109667#011validation-error:0.1538[0m
[34m[12:36:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[96]#011train-error:0.109267#011validation-error:0.1537[0m
[34m[12:36:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[97]#011train-error:0.108067#011validation-error:0.1528[0m
[34m[12:36:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[98]#011train-error:0.1078#011validation-error:0.1528[0m
[34m[12:36:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[99]#011train-error:0.107267#011validation-error:0.1519[0m
[34m[12:36:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[100]#011train-error:0.106467#011validation-error:0.1522[0m
[34m[12:36:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[101]#011train-error:0.1062#011validation-error:0.1524[0m
[34m[12:36:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[102]#011train-error:0.106#011validation-error:0.1523[0m
[34m[12:36:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[103]#011train-error:0.1056#011validation-error:0.1521[0m
[34m[12:36:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[104]#011train-error:0.106067#011validation-error:0.1513[0m
[34m[12:36:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[105]#011train-error:0.1058#011validation-error:0.1512[0m
[34m[12:36:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[106]#011train-error:0.105467#011validation-error:0.1515[0m
[34m[12:36:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[107]#011train-error:0.104867#011validation-error:0.1514[0m
[34m[12:36:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[108]#011train-error:0.104867#011validation-error:0.1511[0m
[34m[12:36:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[109]#011train-error:0.103933#011validation-error:0.1504[0m
[34m[12:36:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[110]#011train-error:0.103933#011validation-error:0.15[0m
[34m[12:36:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[111]#011train-error:0.1036#011validation-error:0.1493[0m
[34m[12:36:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[112]#011train-error:0.103733#011validation-error:0.1497[0m
[34m[12:36:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[113]#011train-error:0.103#011validation-error:0.1499[0m
[34m[12:36:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[114]#011train-error:0.102267#011validation-error:0.1485[0m
[34m[12:36:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[115]#011train-error:0.101933#011validation-error:0.1488[0m
[34m[12:36:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[116]#011train-error:0.1014#011validation-error:0.1494[0m
[34m[12:36:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[117]#011train-error:0.100133#011validation-error:0.1483[0m
[34m[12:36:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[118]#011train-error:0.099133#011validation-error:0.1486[0m
[34m[12:36:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[119]#011train-error:0.098667#011validation-error:0.1486[0m
[34m[12:36:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[120]#011train-error:0.099133#011validation-error:0.1489[0m
[34m[12:36:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[121]#011train-error:0.098467#011validation-error:0.1477[0m
[34m[12:36:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[122]#011train-error:0.098733#011validation-error:0.1469[0m
[34m[12:36:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[123]#011train-error:0.0976#011validation-error:0.1469[0m
[34m[12:36:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[124]#011train-error:0.096533#011validation-error:0.1463[0m
[34m[12:36:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[125]#011train-error:0.095867#011validation-error:0.1467[0m
[34m[12:37:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[126]#011train-error:0.0956#011validation-error:0.1469[0m
[34m[12:37:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[127]#011train-error:0.0958#011validation-error:0.147[0m
[34m[12:37:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[128]#011train-error:0.095267#011validation-error:0.1476[0m
[34m[12:37:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[129]#011train-error:0.095267#011validation-error:0.1468[0m
[34m[12:37:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[130]#011train-error:0.0952#011validation-error:0.1465[0m
[34m[12:37:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[131]#011train-error:0.094733#011validation-error:0.1466[0m
[34m[12:37:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[132]#011train-error:0.094667#011validation-error:0.1463[0m
[34m[12:37:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[133]#011train-error:0.094533#011validation-error:0.1458[0m
[34m[12:37:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[134]#011train-error:0.094067#011validation-error:0.1458[0m
[34m[12:37:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[135]#011train-error:0.0936#011validation-error:0.1456[0m
[34m[12:37:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[136]#011train-error:0.093467#011validation-error:0.146[0m
[34m[12:37:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[137]#011train-error:0.092867#011validation-error:0.1455[0m
[34m[12:37:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[138]#011train-error:0.091667#011validation-error:0.1457[0m
[34m[12:37:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[139]#011train-error:0.091467#011validation-error:0.146[0m
[34m[12:37:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[140]#011train-error:0.0912#011validation-error:0.1468[0m
[34m[12:37:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[141]#011train-error:0.0904#011validation-error:0.1461[0m
[34m[12:37:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[142]#011train-error:0.089667#011validation-error:0.1444[0m
[34m[12:37:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[143]#011train-error:0.089133#011validation-error:0.1451[0m
[34m[12:37:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[144]#011train-error:0.0888#011validation-error:0.1439[0m
[34m[12:37:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[145]#011train-error:0.088667#011validation-error:0.1435[0m
[34m[12:37:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[146]#011train-error:0.089133#011validation-error:0.1442[0m
[34m[12:37:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[147]#011train-error:0.088467#011validation-error:0.1449[0m
[34m[12:37:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[148]#011train-error:0.088667#011validation-error:0.1437[0m
[34m[12:37:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[149]#011train-error:0.087933#011validation-error:0.1444[0m
[34m[12:37:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[150]#011train-error:0.087333#011validation-error:0.1449[0m
[34m[12:37:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[151]#011train-error:0.0874#011validation-error:0.1452[0m
[34m[12:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[152]#011train-error:0.087333#011validation-error:0.1452[0m
[34m[12:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[153]#011train-error:0.0872#011validation-error:0.1455[0m
[34m[12:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[154]#011train-error:0.087#011validation-error:0.1451[0m
[34m[12:37:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[155]#011train-error:0.0872#011validation-error:0.1452[0m
[34mStopping. Best iteration:[0m
[34m[145]#011train-error:0.088667#011validation-error:0.1435
[0m
2020-05-19 12:37:48 Uploading - Uploading generated training model
2020-05-19 12:37:48 Completed - Training job completed
Training seconds: 251
Billable seconds: 251
###Markdown
Testing the model

Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples at once. An example of this in industry might be performing an end-of-month report. This method of inference is also useful to us here because it means we can perform inference on our entire test set. To perform a Batch Transformation we first need to create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so, we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with CSV data, so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once, then we need to specify how the data file should be split up. Since each line is a single entry in our data set, we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done, and we would like a bit of feedback while we wait, we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
......................[34mArguments: serve[0m
[34m[2020-05-19 12:41:28 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-05-19 12:41:28 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-05-19 12:41:28 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-05-19 12:41:28 +0000] [38] [INFO] Booting worker with pid: 38[0m
[34m[2020-05-19 12:41:28 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-05-19 12:41:29 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-05-19:12:41:29:INFO] Model loaded successfully for worker : 38[0m
[34m[2020-05-19 12:41:29 +0000] [41] [INFO] Booting worker with pid: 41[0m
[34m[2020-05-19:12:41:29:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-05-19:12:41:29:INFO] Model loaded successfully for worker : 40[0m
[34m[2020-05-19:12:41:29:INFO] Model loaded successfully for worker : 41[0m
[32m2020-05-19T12:41:47.014:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2020-05-19:12:41:49:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:49:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:49:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:49:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:49:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:49:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:49:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:49:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:49:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:49:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:49:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:49:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:50:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:50:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:50:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:50:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:51:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:51:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:52:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:52:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:52:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:51:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:51:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:52:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:52:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:52:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:52:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:52:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:54:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:54:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:54:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:54:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:54:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:55:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:54:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:55:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:55:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:55:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:56:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:56:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:56:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:56:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:56:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:57:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:57:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:41:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:56:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:57:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:57:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:41:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:41:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:41:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:00:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:00:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:00:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:00:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:01:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:01:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:01:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:01:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:01:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:01:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:01:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:01:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:01:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:04:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:04:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:04:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:04:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:04:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:04:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:05:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:05:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:05:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:05:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:06:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:06:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:06:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:06:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:06:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:06:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:09:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:09:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:09:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:09:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:11:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:11:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:11:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:11:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:11:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:11:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:11:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:42:12:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:42:12:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:11:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:11:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:42:12:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:42:12:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work with this file locally, we can perform a bit of notebook magic to copy it to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
download: s3://sagemaker-us-east-1-133790504590/xgboost-2020-05-19-12-38-00-826/test.csv.out to ../data/sentiment_update/test.csv.out
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable (in this case we want the sentiment to be either `1`, for positive, or `0`, for negative), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
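###Markdown
If you would like a slightly more detailed picture than a single accuracy number, a quick confusion matrix shows how the errors split between false positives and false negatives. The cell below is a small optional sketch that is not part of the original walkthrough; it only assumes that `test_y` and `predictions` are defined as above.
###Code
from sklearn.metrics import confusion_matrix

# Rows are the true labels (0 = negative, 1 = positive), columns are the predicted labels.
print(confusion_matrix(test_y, predictions))
###Output
_____no_output_____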
###Markdown
Step 5: Looking at New Data

So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.

However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
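###Markdown
Before doing anything with the new reviews, it can be reassuring to take a very quick look at what we received. The cell below is a small optional sketch that is not in the original notebook; it assumes, as described further down, that `new_X` is a list of tokenized reviews and `new_Y` is a list of `0`/`1` labels.
###Code
# A few quick sanity checks on the newly collected reviews.
print('Number of new reviews:', len(new_X))
print('Fraction labelled positive:', sum(new_Y) / len(new_Y))
print('First few tokens of one review:', new_X[0][:15])
###Output
_____no_output_____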
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`.

(TODO) Testing the current model

Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it.

First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.

First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.

**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary, which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
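###Markdown
Checking a single review works, but since `new_XV` is a NumPy array (because of the `.toarray()` call above) we can also confirm every row at once by looking at its shape. This is a small optional sketch, not part of the original notebook; it assumes the original vocabulary dictionary is still available as `vocabulary`.
###Code
# The second dimension of the encoded data should match the size of the original vocabulary (5000).
print(new_XV.shape, len(vocabulary))
###Output
_____no_output_____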
###Markdown
Now that we've performed the data processing that is required by our model, we can save the processed data locally and then upload it to S3 so that we can construct a batch transform job and see how well our model is working.

First, we save the data locally.

**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.

**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.

**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
.........................[34mArguments: serve[0m
[34m[2020-05-19 12:50:05 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-05-19 12:50:05 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-05-19 12:50:05 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-05-19 12:50:05 +0000] [37] [INFO] Booting worker with pid: 37[0m
[34m[2020-05-19 12:50:05 +0000] [38] [INFO] Booting worker with pid: 38[0m
[34m[2020-05-19 12:50:05 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-05-19 12:50:05 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-05-19:12:50:05:INFO] Model loaded successfully for worker : 38[0m
[34m[2020-05-19:12:50:05:INFO] Model loaded successfully for worker : 37[0m
[34m[2020-05-19:12:50:05:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-05-19:12:50:05:INFO] Model loaded successfully for worker : 40[0m
[32m2020-05-19T12:50:23.729:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2020-05-19:12:50:26:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:26:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:26:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:26:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:26:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:26:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:26:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:26:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:26:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:26:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:26:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:26:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:26:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:29:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:29:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:29:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:29:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:29:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:29:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:29:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:29:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:29:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:29:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:29:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:29:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:29:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:29:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:29:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:29:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:34:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:34:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:34:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:34:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:34:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:34:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:34:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:34:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:34:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:34:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:34:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:34:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:34:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:34:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:34:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:34:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:36:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:36:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:36:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:36:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:36:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:36:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:36:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:36:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:36:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:36:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:36:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:36:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:36:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:36:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:36:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:36:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:38:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:38:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:38:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:38:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:39:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:39:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:39:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:39:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:39:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:39:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:39:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:39:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:39:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:39:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:39:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:39:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:41:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:41:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:41:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:41:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:43:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:43:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:43:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:43:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:43:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:43:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:43:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:43:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:44:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:44:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:44:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:44:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:44:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:44:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:44:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:44:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:46:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:46:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:46:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:46:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:46:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:46:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:46:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:46:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:46:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:46:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:46:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:46:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:46:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:46:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:46:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:46:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:48:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:48:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:48:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:48:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:48:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:48:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:48:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:48:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:12:50:49:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:12:50:49:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:12:50:49:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:12:50:49:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
download: s3://sagemaker-us-east-1-133790504590/xgboost-2020-05-19-12-46-12-480/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.

In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one, and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit, so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.

Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.

To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.

**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
WARNING:sagemaker:Using already existing model: xgboost-2020-05-19-12-30-44-408
###Markdown
Diagnose the problem

Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews, so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.

**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* that produces samples from the new data set which are not classified correctly. To get the *next* sample we simply pass the generator to the built-in `next` function.
###Code
print(next(gn))
###Output
(['watch', 'lion', 'king', 'lion', 'king', 'ii', 'enjoy', 'thoroughli', 'thought', 'lion', 'king', '1', '5', 'might', 'worth', 'watch', 'disappoint', 'disney', 'must', 'get', 'desper', 'revenu', 'especi', 'lost', 'deal', 'pixar', 'basic', 'pick', 'bit', 'footag', 'left', 'editor', 'floor', 'garbag', 'glu', 'togeth', 'make', 'aquick', 'buck', 'unlik', 'lk', 'ii', 'strong', 'stori', 'line', 'movi', 'hardli', 'stori', 'charact', 'anim', 'alway', 'fun', 'look', 'simpli', 'enough', 'materi', 'movi', 'bit', 'could', 'good', '2nd', 'disk', 'filler', 'origin', 'offer', 'disney', 'shame', 'put', 'trash', 'make', 'quick', 'buck', 'next', 'time', 'take', 'time', 'effort', 'put', 'endur', 'work'], 0)
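###Markdown
Calling `next(gn)` repeatedly walks through further misclassified reviews. The cell below is a small optional sketch (not part of the original notebook) that prints a few more of them in a slightly more compact form.
###Code
# Look at a handful of additional misclassified reviews. Each item produced by the
# generator is a (token_list, true_label) pair that the current model got wrong.
for _ in range(3):
    tokens, label = next(gn)
    print('true label:', label, '| first tokens:', tokens[:12])
###Output
_____no_output_____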
###Markdown
After looking at a few examples, maybe we decide to look at the `5000` most frequently appearing words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.

To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'playboy', 'ghetto', '21st', 'victorian', 'reincarn', 'weari', 'spill'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'orchestr', 'optimist', 'dubiou', 'omin', 'masterson', 'banana', 'sophi'}
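###Markdown
One simple way to dig a little deeper into these differences is to count how often the words unique to each vocabulary actually occur in the new reviews. The cell below is an optional sketch that is not part of the original notebook; it assumes `new_X` (the tokenized new reviews) is still in memory at this point, and it simply counts raw token occurrences rather than reusing the fitted vectorizers.
###Code
from collections import Counter

# Count raw token occurrences across all of the new reviews.
new_token_counts = Counter(word for review in new_X for word in review)

print('Words only in the original vocabulary (occurrences in the new data):')
for word in sorted(original_vocabulary - new_vocabulary):
    print('  {:12s} {}'.format(word, new_token_counts[word]))

print('Words only in the new vocabulary (occurrences in the new data):')
for word in sorted(new_vocabulary - original_vocabulary):
    print('  {:12s} {}'.format(word, new_token_counts[word]))
###Output
_____no_output_____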
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.

**Question:** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.

(TODO) Build a new model

Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.

In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary that we just created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset, so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.

**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.

**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.

**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
2020-05-19 13:03:48 Starting - Starting the training job...
2020-05-19 13:03:50 Starting - Launching requested ML instances......
2020-05-19 13:05:07 Starting - Preparing the instances for training......
2020-05-19 13:06:01 Downloading - Downloading input data...
2020-05-19 13:06:37 Training - Downloading the training image..[34mArguments: train[0m
[34m[2020-05-19:13:06:58:INFO] Running standalone xgboost training.[0m
[34m[2020-05-19:13:06:58:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8477.87mb[0m
[34m[2020-05-19:13:06:58:INFO] Determined delimiter of CSV input is ','[0m
[34m[13:06:58] S3DistributionType set as FullyReplicated[0m
[34m[13:07:00] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-05-19:13:07:00:INFO] Determined delimiter of CSV input is ','[0m
[34m[13:07:00] S3DistributionType set as FullyReplicated[0m
[34m[13:07:01] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
2020-05-19 13:06:57 Training - Training image download completed. Training in progress.[34m[13:07:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.2982#011validation-error:0.2978[0m
[34mMultiple eval metrics have been passed: 'validation-error' will be used for early stopping.
[0m
[34mWill train until validation-error hasn't improved in 10 rounds.[0m
[34m[13:07:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.291133#011validation-error:0.2935[0m
[34m[13:07:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.280267#011validation-error:0.2816[0m
[34m[13:07:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.270333#011validation-error:0.2764[0m
[34m[13:07:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.255467#011validation-error:0.266[0m
[34m[13:07:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.2556#011validation-error:0.2658[0m
[34m[13:07:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.2482#011validation-error:0.2585[0m
[34m[13:07:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.240933#011validation-error:0.2506[0m
[34m[13:07:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.234733#011validation-error:0.2462[0m
[34m[13:07:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.228467#011validation-error:0.2435[0m
[34m[13:07:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.224267#011validation-error:0.2415[0m
[34m[13:07:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.220733#011validation-error:0.2401[0m
[34m[13:07:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.216467#011validation-error:0.2369[0m
[34m[13:07:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.2136#011validation-error:0.2328[0m
[34m[13:07:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.210333#011validation-error:0.2293[0m
[34m[13:07:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.209#011validation-error:0.2272[0m
[34m[13:07:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.203867#011validation-error:0.2256[0m
[34m[13:07:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.201733#011validation-error:0.2234[0m
[34m[13:07:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.196267#011validation-error:0.2185[0m
[34m[13:07:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[19]#011train-error:0.1942#011validation-error:0.2176[0m
[34m[13:07:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.1906#011validation-error:0.2155[0m
[34m[13:07:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.188#011validation-error:0.2125[0m
[34m[13:07:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.1862#011validation-error:0.2112[0m
[34m[13:07:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.185267#011validation-error:0.2087[0m
[34m[13:07:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.183533#011validation-error:0.2081[0m
[34m[13:07:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.1808#011validation-error:0.2066[0m
[34m[13:07:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.180333#011validation-error:0.2073[0m
[34m[13:07:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.177933#011validation-error:0.205[0m
[34m[13:07:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[28]#011train-error:0.175133#011validation-error:0.2039[0m
[34m[13:07:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.172867#011validation-error:0.2029[0m
[34m[13:07:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.171667#011validation-error:0.2008[0m
[34m[13:07:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.170467#011validation-error:0.1997[0m
[34m[13:07:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.1692#011validation-error:0.198[0m
[34m[13:07:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.167667#011validation-error:0.1976[0m
[34m[13:07:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.1666#011validation-error:0.1981[0m
[34m[13:07:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.165667#011validation-error:0.1975[0m
[34m[13:07:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.1648#011validation-error:0.1942[0m
[34m[13:07:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.161333#011validation-error:0.1926[0m
[34m[13:07:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.159267#011validation-error:0.1916[0m
[34m[13:07:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.1594#011validation-error:0.1908[0m
[34m[13:07:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[40]#011train-error:0.158333#011validation-error:0.1919[0m
[34m[13:07:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[41]#011train-error:0.1574#011validation-error:0.1912[0m
[34m[13:07:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[42]#011train-error:0.154933#011validation-error:0.1901[0m
[34m[13:08:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[43]#011train-error:0.155067#011validation-error:0.1902[0m
[34m[13:08:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[44]#011train-error:0.1538#011validation-error:0.1911[0m
[34m[13:08:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[45]#011train-error:0.153067#011validation-error:0.1925[0m
[34m[13:08:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[46]#011train-error:0.1524#011validation-error:0.1923[0m
[34m[13:08:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[47]#011train-error:0.151467#011validation-error:0.1922[0m
[34m[13:08:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[48]#011train-error:0.1478#011validation-error:0.1911[0m
[34m[13:08:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[49]#011train-error:0.1468#011validation-error:0.1904[0m
[34m[13:08:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[50]#011train-error:0.147333#011validation-error:0.1902[0m
[34m[13:08:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[51]#011train-error:0.1458#011validation-error:0.1895[0m
[34m[13:08:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[52]#011train-error:0.145933#011validation-error:0.1889[0m
[34m[13:08:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[53]#011train-error:0.1444#011validation-error:0.1889[0m
[34m[13:08:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[54]#011train-error:0.143933#011validation-error:0.1877[0m
[34m[13:08:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[55]#011train-error:0.144067#011validation-error:0.1875[0m
[34m[13:08:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[56]#011train-error:0.144#011validation-error:0.1866[0m
[34m[13:08:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[57]#011train-error:0.142133#011validation-error:0.1867[0m
[34m[13:08:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[58]#011train-error:0.142#011validation-error:0.1866[0m
[34m[13:08:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[59]#011train-error:0.141267#011validation-error:0.186[0m
[34m[13:08:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[60]#011train-error:0.140733#011validation-error:0.1849[0m
[34m[13:08:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[61]#011train-error:0.140333#011validation-error:0.1855[0m
[34m[13:08:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[62]#011train-error:0.139733#011validation-error:0.1843[0m
[34m[13:08:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[63]#011train-error:0.138267#011validation-error:0.1845[0m
[34m[13:08:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[64]#011train-error:0.139#011validation-error:0.1831[0m
[34m[13:08:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[65]#011train-error:0.137467#011validation-error:0.183[0m
[34m[13:08:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[66]#011train-error:0.137133#011validation-error:0.1829[0m
[34m[13:08:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[67]#011train-error:0.137133#011validation-error:0.1828[0m
[34m[13:08:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[68]#011train-error:0.1364#011validation-error:0.1829[0m
[34m[13:08:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[69]#011train-error:0.135533#011validation-error:0.1811[0m
[34m[13:08:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[70]#011train-error:0.135733#011validation-error:0.1818[0m
[34m[13:08:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[71]#011train-error:0.135067#011validation-error:0.1812[0m
[34m[13:08:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[72]#011train-error:0.1342#011validation-error:0.1816[0m
[34m[13:08:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[73]#011train-error:0.132933#011validation-error:0.1797[0m
[34m[13:08:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[74]#011train-error:0.132333#011validation-error:0.1798[0m
[34m[13:08:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[75]#011train-error:0.131867#011validation-error:0.1786[0m
[34m[13:08:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[76]#011train-error:0.131067#011validation-error:0.1781[0m
[34m[13:08:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[77]#011train-error:0.130733#011validation-error:0.1773[0m
[34m[13:08:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[78]#011train-error:0.129933#011validation-error:0.1765[0m
[34m[13:08:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[79]#011train-error:0.129333#011validation-error:0.1773[0m
[34m[13:08:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[80]#011train-error:0.128667#011validation-error:0.1786[0m
[34m[13:08:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[81]#011train-error:0.129133#011validation-error:0.178[0m
[34m[13:08:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[82]#011train-error:0.128267#011validation-error:0.1781[0m
[34m[13:08:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[83]#011train-error:0.127733#011validation-error:0.1778[0m
[34m[13:08:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[84]#011train-error:0.127467#011validation-error:0.1779[0m
[34m[13:08:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[85]#011train-error:0.1268#011validation-error:0.1783[0m
[34m[13:08:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[86]#011train-error:0.127067#011validation-error:0.1779[0m
[34m[13:08:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[87]#011train-error:0.1262#011validation-error:0.1781[0m
[34m[13:08:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[88]#011train-error:0.125067#011validation-error:0.1783[0m
[34mStopping. Best iteration:[0m
[34m[78]#011train-error:0.129933#011validation-error:0.1765
[0m
2020-05-19 13:09:06 Uploading - Uploading generated training model
2020-05-19 13:09:06 Completed - Training job completed
Training seconds: 185
Billable seconds: 185
###Markdown
(TODO) Check the new model So now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably. To do this, we will first test our model on the new data. **Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline. **Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model. **TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
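###Markdown
As an aside on the leakage question above: one way to avoid the leakage would have been to hold out a slice of the new data *before* re-training and to evaluate the new model only on that held-out slice. The cell below is just a sketch of that idea (shown commented out, since the in-memory bag-of-words encoding of the new data was freed earlier to save memory); the variable names are illustrative, not part of the original workflow.
###Code
# Sketch only: split the new data before re-training so that evaluation uses
# reviews the new model has never seen. The names below are illustrative.
from sklearn.model_selection import train_test_split

# new_XV_fit, new_XV_holdout, new_Y_fit, new_Y_holdout = train_test_split(
#     new_XV, new_Y, test_size=0.2, random_state=0)
#
# The new model would then be trained on (new_XV_fit, new_Y_fit) and scored on
# (new_XV_holdout, new_Y_holdout), which it never saw during training.
###Output
_____no_output_____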
###Markdown
Next we test our model on the new data. **TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable).
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
........................[34mArguments: serve[0m
[34m[2020-05-19 13:13:21 +0000] [1] [INFO] Starting gunicorn 19.7.1[0m
[34m[2020-05-19 13:13:21 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)[0m
[34m[2020-05-19 13:13:21 +0000] [1] [INFO] Using worker: gevent[0m
[34m[2020-05-19 13:13:21 +0000] [38] [INFO] Booting worker with pid: 38[0m
[34m[2020-05-19 13:13:21 +0000] [39] [INFO] Booting worker with pid: 39[0m
[34m[2020-05-19 13:13:21 +0000] [40] [INFO] Booting worker with pid: 40[0m
[34m[2020-05-19 13:13:21 +0000] [41] [INFO] Booting worker with pid: 41[0m
[34m[2020-05-19:13:13:21:INFO] Model loaded successfully for worker : 39[0m
[34m[2020-05-19:13:13:21:INFO] Model loaded successfully for worker : 38[0m
[34m[2020-05-19:13:13:21:INFO] Model loaded successfully for worker : 40[0m
[34m[2020-05-19:13:13:21:INFO] Model loaded successfully for worker : 41[0m
[32m2020-05-19T13:13:51.624:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2020-05-19:13:13:54:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:54:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:54:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:54:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:54:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:54:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:54:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:54:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:57:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:57:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:57:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:57:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:57:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:57:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:57:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:57:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:59:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:59:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:13:59:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:13:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:59:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:59:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:59:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:13:59:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:13:59:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:00:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:00:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:00:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:00:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:02:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:02:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:02:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:02:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:06:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:06:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:06:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:06:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:06:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:06:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:07:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:07:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:07:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:07:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:09:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:09:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:09:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:09:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:09:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:09:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:09:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:09:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:09:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:09:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:09:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:09:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:09:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:09:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:09:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:09:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:11:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:11:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:11:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:11:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:11:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:11:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:11:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:11:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:11:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:11:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:12:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:12:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:12:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:12:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:14:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:14:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:14:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:14:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:14:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:14:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:14:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:14:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:14:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:14:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:14:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:14:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:16:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:16:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:16:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:16:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:16:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:16:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:16:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:16:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:16:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:16:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:16:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:16:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-05-19:13:14:17:INFO] Sniff delimiter as ','[0m
[34m[2020-05-19:13:14:17:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-05-19:13:14:17:INFO] Sniff delimiter as ','[0m
[35m[2020-05-19:13:14:17:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
download: s3://sagemaker-us-east-1-133790504590/xgboost-2020-05-19-13-09-33-416/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
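###Markdown
Accuracy is a single number and can hide class-specific behavior, so it can be worth printing a slightly richer summary as well. The cell below is an optional sketch that reuses the `new_Y` labels and the `predictions` we just loaded.
###Code
# Optional: a confusion matrix and per-class precision/recall give a fuller
# picture than accuracy alone.
from sklearn.metrics import confusion_matrix, classification_report

print(confusion_matrix(new_Y, predictions))
print(classification_report(new_Y, predictions, target_names=['negative', 'positive']))
###Output
_____no_output_____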
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model. However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong. To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created based on the new data. **TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the Model So we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model. Of course, to do this we need to create an endpoint configuration for our newly created model. First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want. **TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
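###Markdown
If you want to double check what was just created, the low level client can describe the endpoint configuration back to us. The cell below is an optional sanity check, not a required step.
###Code
# Optional sanity check: confirm the endpoint configuration exists and that its
# production variant points at the newly created model.
config_description = session.sagemaker_client.describe_endpoint_config(
    EndpointConfigName=new_xgb_endpoint_config_name)
print(config_description['ProductionVariants'])
###Output
_____no_output_____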
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration. Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used. **TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
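###Markdown
While the update is in progress the endpoint keeps serving the old model. If you are curious, the cell below is an optional sketch that asks SageMaker for the endpoint's current status; you would typically see it report `Updating` for a while and then `InService`.
###Code
# Optional: inspect the endpoint status during the update.
endpoint_description = session.sagemaker_client.describe_endpoint(
    EndpointName=xgb_predictor.endpoint)
print(endpoint_description['EndpointStatus'])
###Output
_____no_output_____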
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
---------------!
###Markdown
Step 7: Delete the Endpoint Of course, since we are done with the deployed endpoint we need to make sure to shut it down; otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
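###Markdown
The endpoint is what incurs ongoing charges, but the endpoint configurations and models we created along the way also accumulate in the SageMaker console. If you want to tidy those up as well, the cell below is an optional sketch using the low level client (it assumes the `new_xgb_endpoint_config_name` and `new_xgb_transformer` variables from earlier are still in memory).
###Code
# Optional extra clean-up: remove the endpoint configuration and model objects
# created above so they no longer appear in the SageMaker console.
session.sagemaker_client.delete_endpoint_config(
    EndpointConfigName=new_xgb_endpoint_config_name)
session.sagemaker_client.delete_model(ModelName=new_xgb_transformer.model_name)
###Output
_____no_output_____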
###Markdown
Some Additional Questions This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself. For example: - What other ways could the underlying distribution change? - Is it a good idea to re-train the model using only the new data? - What would change if the quantity of new data wasn't large? Say you only received 500 samples. Optional: Clean up The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# # First we will remove all of the files contained in the data_dir directory
# !rm $data_dir/*
# # And then we delete the directory itself
# !rmdir $data_dir
# # Similarly we will remove the files in the cache_dir directory and the directory itself
# !rm $cache_dir/*
# !rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker _Deep Learning Nanodegree Program | Deployment_ --- In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model. This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. Instructions Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell. > **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited, typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the data The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise. > Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011. We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the data The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the data Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word, reusing the PorterStemmer instance created above
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words features For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
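###Markdown
The `vocabulary` dictionary returned above maps each word to its column index in the bag-of-words matrix. Inverting it is a handy, optional trick for checking which word a particular feature column corresponds to; the cell below is only a sketch.
###Code
# Optional: invert the word -> column-index mapping so we can look up columns.
index_to_word = {index: word for word, index in vocabulary.items()}
# For example, the words behind the first ten feature columns:
[index_to_word[i] for i in range(10)]
###Output
_____no_output_____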
###Markdown
Step 4: Classification using XGBoost Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the dataset The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first for each sample. For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
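###Markdown
Before uploading anything it can be reassuring to confirm that the files really do follow the required layout. The cell below is an optional sketch that peeks at the first row of `train.csv`: it should contain the label followed by the 5000 bag-of-words counts, with no header or index column.
###Code
# Optional sanity check on the saved training file's layout.
with open(os.path.join(data_dir, 'train.csv')) as f:
    first_row = f.readline().strip().split(',')
print(len(first_row))  # expected: 5001 (1 label + 5000 features)
print(first_row[0])    # expected: '0' or '1'
###Output
_____no_output_____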
###Markdown
Uploading Training / Validation files to S3 Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later. For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option. Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded. For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
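###Markdown
Instead of switching to the S3 console, we can also list the uploaded objects directly from the notebook. The cell below is an optional sketch that uses the boto3 session underlying our SageMaker session.
###Code
# Optional: list the objects that were just uploaded under our prefix.
s3_client = session.boto_session.client('s3')
response = s3_client.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
###Output
_____no_output_____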
###Markdown
Creating the XGBoost model Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another: - Model Artifacts - Training Code (Container) - Inference Code (Container) The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training. The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data. The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost model Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the model Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New Data So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app. However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
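###Markdown
Before doing anything else it can help to take a quick look at what was just loaded. The cell below is an optional sketch; the exact contents depend on what the `new_data` module returns, but `new_X` should be a list of already-tokenized reviews and `new_Y` the matching `0`/`1` labels.
###Code
# Optional: a quick peek at the new data.
print(len(new_X), len(new_Y))
print(new_Y[0])
print(new_X[0][:20])  # the first 20 tokens of the first new review
###Output
_____no_output_____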
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current model Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it. First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now. First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding. **TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working. First, we save the data locally. **TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3. **TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews. **TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review. In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set. Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set. To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production. **TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Diagnose the problem Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect. **NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
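###Markdown
If Python generators are unfamiliar, the cell below is a tiny standalone illustration (it has nothing to do with the sentiment pipeline): `yield` hands back one value at a time and the function resumes where it left off on each call to `next`.
###Code
# A minimal generator example, unrelated to the sentiment data.
def count_up_to(n):
    i = 0
    while i < n:
        yield i
        i += 1

toy = count_up_to(3)
print(next(toy))  # 0
print(next(toy))  # 1
###Output
_____no_output_____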
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the built-in `next` function on our generator.
###Code
print(next(gn))
###Output
_____no_output_____
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews. To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
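###Markdown
One possible way to follow up on these set differences, and a starting point for the question below, is to count how often the words that are new to the vocabulary actually occur in the new reviews. This is only a sketch and assumes the `new_X`, `new_vectorizer`, `original_vocabulary` and `new_vocabulary` variables from the cells above are still in memory.
###Code
# Sketch: total occurrence counts, in the new reviews, of the words that appear
# in the new vocabulary but not in the original one. A few very frequent
# newcomers would be a strong hint about what has changed.
import numpy as np

new_word_counts = np.asarray(new_vectorizer.transform(new_X).sum(axis=0)).flatten()
newcomers = new_vocabulary - original_vocabulary
newcomer_counts = {w: int(new_word_counts[new_vectorizer.vocabulary_[w]]) for w in newcomers}
sorted(newcomer_counts.items(), key=lambda item: item[1], reverse=True)[:10]
###Output
_____no_output_____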
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency. **Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account? **NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new model Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model. **NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
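###Markdown
Coming back to the leakage question above: one common way to address it is to hold out a portion of the new data *before* any re-training and never let the model see it. The cell below is only a hypothetical sketch — the `new_XV` and `new_Y` arrays were set to `None` earlier to save memory, so you would need to rebuild them first.
###Code
# Hypothetical sketch (not run here): carve out a held-out evaluation split from the
# new data *before* training, so the later accuracy check is not done on training data.
# Assumes new_XV and new_Y have been rebuilt / reloaded.
from sklearn.model_selection import train_test_split

new_trainval_X, new_test_X, new_trainval_y, new_test_y = train_test_split(
    new_XV, new_Y, test_size=0.2)  # keep 20% aside purely for evaluation
###Output
_____no_output_____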
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the Endpoint
Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional Questions
This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself. For example,
- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large? Say you only received 500 samples.
Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2019-01-28 14:13:28-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 6.25MB/s in 17s
2019-01-28 14:13:45 (4.75 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the data
The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the data
Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words features
For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
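As a quick illustration of what a bag-of-words encoding looks like (a toy example, separate from the pipeline below), each document simply becomes a vector of word counts over a fixed vocabulary:
###Code
# Toy illustration only (not used by the pipeline below): two tiny documents turned
# into count vectors over the vocabulary learned from them.
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["the movie was great great fun", "the movie was terrible"]
toy_vectorizer = CountVectorizer()
toy_features = toy_vectorizer.fit_transform(toy_docs).toarray()

print(sorted(toy_vectorizer.vocabulary_))  # the learned vocabulary (column order)
print(toy_features)                        # one row of counts per document
###Output
_____no_output_____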
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
###Output
_____no_output_____
###Markdown
Step 4: Classification using XGBoost
Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.
Writing the dataset
The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3
Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost model
Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost model
Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
INFO:sagemaker:Creating training-job with name: xgboost-2019-01-28-16-10-58-214
###Markdown
Testing the model
Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
INFO:sagemaker:Creating model with name: xgboost-2019-01-28-16-10-58-214
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
INFO:sagemaker:Creating transform job with name: xgboost-2019-01-28-16-16-44-930
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
.........................................!
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data
new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
# vectorizer = None
# Solution:
vectorizer = CountVectorizer(vocabulary=vocabulary,
preprocessor=lambda x: x, tokenizer=lambda x: x)
# TODO: Transform our new data set and store the transformed data in the variable new_XV
# new_XV = None
# Solution
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
# Solution:
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
# new_data_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
# Solution:
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
INFO:sagemaker:Creating transform job with name: xgboost-2019-01-28-16-22-21-140
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
download: s3://sagemaker-ap-south-1-651711011978/xgboost-2019-01-28-16-16-44-930/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
download: s3://sagemaker-ap-south-1-651711011978/xgboost-2019-01-28-16-16-44-930/test.csv.out to ../data/sentiment_update/test.csv.out
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# xgb_predictor = None
# Solution:
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.t2.medium')
###Output
INFO:sagemaker:Creating model with name: xgboost-2019-01-28-16-25-59-549
INFO:sagemaker:Creating endpoint with name xgboost-2019-01-28-16-10-58-214
###Markdown
Diagnose the problem
Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
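If generators are unfamiliar, here is a minimal, self-contained illustration (unrelated to the review data): a function containing `yield` produces its values lazily, one at a time, each time `next` is called.
###Code
# Minimal generator illustration (separate from the sentiment data): values are
# produced lazily, one per call to next(), rather than all at once.
def count_up_to(n):
    i = 0
    while i < n:
        yield i
        i += 1

g = count_up_to(3)
print(next(g))  # 0
print(next(g))  # 1
###Output
_____no_output_____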
###Code
def get_sample(in_X, in_XV, in_Y):
for idx, smp in enumerate(in_X):
res = round(float(xgb_predictor.predict(in_XV[idx])))
if res != in_Y[idx]:
yield smp, in_Y[idx]
gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
(['like', 'big', 'mechan', 'toy', 'say', 'charact', 'earli', 'comment', 'jaw', 'well', 'blood', 'surf', 'would', 'wish', 'beast', 'convinc', 'shark', 'jaw', 'seri', 'word', 'digit', 'special', 'effect', 'movi', 'terribl', 'act', 'direct', 'much', 'better', 'either', 'seem', 'suit', 'deodor', 'bubbl', 'gum', 'commerci', 'horror', 'movi', 'attitud', 'peopl', 'work', 'film', 'show', 'contempt', 'genr', 'audienc', 'say', 'like', 'film', 'encourag', 'filmmak', 'offer', 'us', 'crap', 'destroy', 'poor', 'horror', 'genr', '1', '2', 'banana'], 1)
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'playboy', 'victorian', 'weari', 'spill', 'ghetto', '21st', 'reincarn'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'orchestr', 'omin', 'dubiou', 'banana', 'optimist', 'masterson', 'sophi'}
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new model
Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None
# new_val_location = None
# new_train_location = None
# Solution:
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None
# Solution:
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
role, # What is our current IAM Role
train_instance_count=1, # How many compute instances
train_instance_type='ml.m4.xlarge', # What kind of compute instances
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None
# Solution:
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
# Solution:
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
INFO:sagemaker:Creating training-job with name: xgboost-2019-01-28-16-43-42-302
###Markdown
(TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# new_xgb_transformer = None
# Solution:
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
INFO:sagemaker:Creating model with name: xgboost-2019-01-28-16-43-42-302
###Markdown
Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
# Solution:
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
INFO:sagemaker:Creating transform job with name: xgboost-2019-01-28-16-54-25-329
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
download: s3://sagemaker-ap-south-1-651711011978/xgboost-2019-01-28-16-54-25-329/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# test_X = None
# Solution:
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# new_xgb_endpoint_config_name = None
# Solution:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# new_xgb_endpoint_config_info = None
# Solution:
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.t2.medium",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# Solution:
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
------------------------------------------------------------------------!
###Markdown
Step 7: Delete the Endpoint. Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
INFO:sagemaker:Deleting endpoint with name: xgboost-2019-01-28-16-10-58-214
###Markdown
Some Additional Questions. This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself. For example: - What other ways could the underlying distribution change? - Is it a good idea to re-train the model using only the new data? - What would change if the quantity of new data wasn't large, say you only received 500 samples? Optional: Clean up. The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____ |
dev/_trush/mooc_python-machine-learning/Assignment+1.ipynb | ###Markdown
---_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._--- Assignment 1 - Introduction to Machine Learning For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below).
###Code
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
#print(cancer.DESCR) # Print the data set description
###Output
_____no_output_____
###Markdown
The object returned by `load_breast_cancer()` is a scikit-learn Bunch object, which is similar to a dictionary.
###Code
cancer.keys()
###Output
_____no_output_____
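###Markdown
Because the Bunch behaves like a dictionary, its fields can be read either by key or by attribute. A minimal illustration (not required by the graded questions below):
###Code
# Dictionary-style and attribute-style access to the Bunch return the same arrays.
print(cancer['data'].shape)     # feature matrix: one row per tumour sample
print(cancer.target.shape)      # one label per sample (0 = malignant, 1 = benign)
print(cancer['target_names'])   # class names corresponding to the 0/1 labels
###Output
_____no_output_____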
###Markdown
Question 0 (Example)How many features does the breast cancer dataset have?*This function should return an integer.*
###Code
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the number of features of the breast cancer dataset, which is an integer.
# The assignment question description will tell you the general format the autograder is expecting
return len(cancer['feature_names'])
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
###Output
_____no_output_____
###Markdown
Question 1Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame. Convert the sklearn.dataset `cancer` to a DataFrame. *This function should return a `(569, 31)` DataFrame with * *columns = * ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension', 'target']*and index = * RangeIndex(start=0, stop=569, step=1)
###Code
def answer_one():
# Your code here
cancerDF = pd.DataFrame(cancer['data'], index=range(0, 569, 1))
cancerDF.columns=cancer['feature_names']
cancerDF['target'] = cancer['target']
return cancerDF # Return your answer
answer_one()
###Output
_____no_output_____
###Markdown
Question 2What is the class distribution? (i.e. how many instances of `malignant` (encoded 0) and how many `benign` (encoded 1)?)*This function should return a Series named `target` of length 2 with integer values and index =* `['malignant', 'benign']`
###Code
def answer_two():
cancerdf = answer_one()
# Your code here
classDistribution = cancerdf['target'].value_counts()
classDistribution.index = ['benign', 'malignant']
return classDistribution # Return your answer
answer_two()
###Output
_____no_output_____
###Markdown
Question 3Split the DataFrame into `X` (the data) and `y` (the labels).*This function should return a tuple of length 2:* `(X, y)`*, where* * `X` *has shape* `(569, 30)`* `y` *has shape* `(569,)`.
###Code
print
def answer_three():
cancerdf = answer_one()
# Your code here
X = cancerdf.loc[:, cancerdf.columns != 'target']
y = cancerdf.loc[:,'target']
return X,y
answer_three()
###Output
_____no_output_____
###Markdown
Question 4Using `train_test_split`, split `X` and `y` into training and test sets `(X_train, X_test, y_train, and y_test)`.**Set the random number generator state to 0 using `random_state=0` to make sure your results match the autograder!***This function should return a tuple of length 4:* `(X_train, X_test, y_train, y_test)`*, where* * `X_train` *has shape* `(426, 30)`* `X_test` *has shape* `(143, 30)`* `y_train` *has shape* `(426,)`* `y_test` *has shape* `(143,)`
###Code
from sklearn.model_selection import train_test_split
def answer_four():
X, y = answer_three()
# Your code here
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
return X_train, X_test, y_train, y_test
###Output
_____no_output_____
###Markdown
Question 5Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with `X_train`, `y_train` and using one nearest neighbor (`n_neighbors = 1`).*This function should return a * `sklearn.neighbors.classification.KNeighborsClassifier`.
###Code
from sklearn.neighbors import KNeighborsClassifier
def answer_five():
X_train, X_test, y_train, y_test = answer_four()
# Your code here
knn = KNeighborsClassifier(n_neighbors = 1)
knn.fit(X_train, y_train)
return knn # Return your answer
answer_five()
###Output
_____no_output_____
###Markdown
Question 6. Using your knn classifier, predict the class label using the mean value for each feature. Hint: You can use `cancerdf.mean()[:-1].values.reshape(1, -1)` which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the predict method of KNeighborsClassifier). *This function should return a numpy array either `array([ 0.])` or `array([ 1.])`*
###Code
def answer_six():
cancerdf = answer_one()
means = cancerdf.mean()[:-1].values.reshape(1, -1)
# Your code here
knn = answer_five()
y_predict = knn.predict(means)
return y_predict # Return your answer
answer_six()
###Output
_____no_output_____
###Markdown
Question 7Using your knn classifier, predict the class labels for the test set `X_test`.*This function should return a numpy array with shape `(143,)` and values either `0.0` or `1.0`.*
###Code
def answer_seven():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
# Your code here
y_predict = knn.predict(X_test)
return y_predict # Return your answer
answer_seven()
###Output
_____no_output_____
###Markdown
Question 8Find the score (mean accuracy) of your knn classifier using `X_test` and `y_test`.*This function should return a float between 0 and 1*
###Code
def answer_eight():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
# Your code here
score = knn.score(X_test, y_test)
return score# Return your answer
answer_eight()
###Output
_____no_output_____
###Markdown
Optional plot. Try using the plotting function below to visualize the different prediction scores between training and test sets, as well as malignant and benign cells.
###Code
def accuracy_plot():
import matplotlib.pyplot as plt
%matplotlib notebook
X_train, X_test, y_train, y_test = answer_four()
# Find the training and testing accuracies by target value (i.e. malignant, benign)
mal_train_X = X_train[y_train==0]
mal_train_y = y_train[y_train==0]
ben_train_X = X_train[y_train==1]
ben_train_y = y_train[y_train==1]
mal_test_X = X_test[y_test==0]
mal_test_y = y_test[y_test==0]
ben_test_X = X_test[y_test==1]
ben_test_y = y_test[y_test==1]
knn = answer_five()
scores = [knn.score(mal_train_X, mal_train_y), knn.score(ben_train_X, ben_train_y),
knn.score(mal_test_X, mal_test_y), knn.score(ben_test_X, ben_test_y)]
plt.figure()
# Plot the scores as a bar chart
bars = plt.bar(np.arange(4), scores, color=['#4c72b0','#4c72b0','#55a868','#55a868'])
# directly label the score onto the bars
for bar in bars:
height = bar.get_height()
plt.gca().text(bar.get_x() + bar.get_width()/2, height*.90, '{0:.{1}f}'.format(height, 2),
ha='center', color='w', fontsize=11)
# remove all the ticks (both axes), and tick labels on the Y axis
plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on')
# remove the frame of the chart
for spine in plt.gca().spines.values():
spine.set_visible(False)
plt.xticks([0,1,2,3], ['Malignant\nTraining', 'Benign\nTraining', 'Malignant\nTest', 'Benign\nTest'], alpha=0.8);
plt.title('Training and Test Accuracies for Malignant and Benign Cells', alpha=0.8)
# Uncomment the plotting function to see the visualization,
# Comment out the plotting function when submitting your notebook for grading
#accuracy_plot()
###Output
_____no_output_____ |
notebooks/180607 - UFTPR Wind Spatio-temporal exploring.ipynb | ###Markdown
The correlation measure for this dataset has to be different. First, the turbines are located very close to each other. Second, wind speed and power generation take similar values across turbines. The idea here is to plot a correlation table between the turbines and a cross-correlation histogram between two of them. Also cite the work "Forecasting wind speed with recurrent neural networks" by Qing Cao, as well as Kusiak's work "Estimation of wind speed: A data-driven approach". 1 - Load the data. 2 - Inspect means and standard deviations for power generation and wind speed; pick the one with the largest variation. 3 - Compute the Pearson cross-correlation between all turbines. 4 - Perform the time-lagged computation for the most correlated turbines.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Reading the Data
###Code
wind_file = '/Users/cseveriano/spatio-temporal-forecasting/data/raw/WindFarm-RN/HISTORICO.xlsx'
df = pd.read_excel(wind_file, index_col=0)
# convert to float, because the values are originally read as object
for col in df.columns:
df[col] = df[col].convert_objects(convert_numeric=True)
wind_power_df = df.iloc[:,0:15]
wind_speed_df = df.iloc[:,15:30]
wind_direction_df = df.iloc[:,30:45]
# Many NaN values in the dataset (about 2000 out of 52000). Since the irregularity of the series is not directly tied to the daily cycle, these rows are dropped
wind_power_df.dropna(inplace=True)
wind_speed_df.dropna(inplace=True)
wind_direction_df.dropna(inplace=True)
wind_power_df.columns = ['WTG01_Power','WTG02_Power','WTG03_Power','WTG04_Power','WTG05_Power','WTG06_Power','WTG07_Power','WTG08_Power','WTG09_Power','WTG10_Power','WTG11_Power','WTG12_Power','WTG13_Power','WTG24_Power','WTG25_Power']
wind_speed_df.columns = ['WTG01_Speed','WTG02_Speed','WTG03_Speed','WTG04_Speed','WTG05_Speed','WTG06_Speed','WTG07_Speed','WTG08_Speed','WTG09_Speed','WTG10_Speed','WTG11_Speed','WTG12_Speed','WTG13_Speed','WTG24_Speed','WTG25_Speed']
wind_direction_df.columns = ['WTG01_Dir','WTG02_Dir','WTG03_Dir','WTG04_Dir','WTG05_Dir','WTG06_Dir','WTG07_Dir','WTG08_Dir','WTG09_Dir','WTG10_Dir','WTG11_Dir','WTG12_Dir','WTG13_Dir','WTG24_Dir','WTG25_Dir']
wind_total_df = pd.concat([wind_power_df, wind_speed_df, wind_direction_df], axis=1)
wind_total_df.dropna(inplace=True)
wind_total_df.to_pickle("df_wind_total.pkl")
###Output
_____no_output_____
###Markdown
Correlation Map
###Code
#correlation matrix
corrmat = wind_power_df.corr()
f, ax = plt.subplots(figsize=(12, 9))
#sns.heatmap(corrmat, vmax=.8, square=True);
sns.heatmap(corrmat, cbar=True, annot=True, square=True, fmt='.2f', vmin=.5)
###Output
_____no_output_____
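###Markdown
Before the cross-correlation plots below, a minimal sketch of step 4 from the outline above: a lagged Pearson correlation between two turbine series. The +/-12-sample lag grid (+/-2 hours at 10-minute resolution) and the column names in the commented usage are assumptions for illustration.
###Code
# Hedged sketch: Pearson correlation between two turbine series at several lags.
def lagged_pearson(s1, s2, max_lag=12):
    # Shift s2 by each lag (in samples) and correlate it with s1.
    return {lag: s1.corr(s2.shift(lag)) for lag in range(-max_lag, max_lag + 1)}
# Example usage (column names depend on whether the renaming cell has been run):
# lags = lagged_pearson(wind_speed_df['WTG01_Speed'], wind_speed_df['WTG02_Speed'])
# best_lag = max(lags, key=lags.get)
###Output
_____no_output_____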
###Markdown
Time Lagged Correlation
###Code
import datetime
y_obs_1 = wind_speed_df[wind_speed_df.index.date == datetime.date(2017, 5, 3)].WTG01
y_obs_2 = wind_speed_df[wind_speed_df.index.date == datetime.date(2017, 5, 3)].WTG02
fig = plt.figure()
fig = plt.figure(figsize=(10,8))
ax1 = fig.add_subplot(211)
ax1.xcorr(y_obs_1, y_obs_2, usevlines=True, maxlags=50, normed=True, lw=2)
ax1.grid(True)
#ax1.axhline(0, color='black', lw=2)
y_obs_4 = wind_speed_df[(wind_speed_df.index >= '2017-05-01') & (wind_speed_df.index <= '2017-05-05')].WTG01
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
plt.plot(y_obs_4)
ax.set_xlabel('Days')
ax.set_ylabel('Wind Speed [m/s]')
ax.legend(loc='best')
ax.set_xticklabels(['','01-May','','02-May','','03-May','','04-May'])
plt.show()
y_obs_4 = wind_power_df[(wind_power_df.index >= '2017-05-01') & (wind_power_df.index <= '2017-05-05')].WTG01
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
plt.plot(y_obs_4)
ax.set_xlabel('Days')
ax.set_ylabel('Wind Power [W/m2]')
ax.legend(loc='best')
#ax.set_xticks(['08:00','12:00','16:00','20:00'])
ax.set_xticklabels(['01-May','','02-May','','03-May','','04-May'])
plt.show()
###Output
_____no_output_____
###Markdown
SSA Decomposition
###Code
import sys
sys.path.insert(0,"../src/ext-libraries/SingularSpectrumAnalysis/")
from mySSA import mySSA
df_ssa = wind_speed_df
dt = []
limit = 1
p_inds = [i for i in range(limit)]
df_clean = pd.DataFrame(columns=df_ssa.columns)
df_residual = pd.DataFrame(columns=df_ssa.columns)
chunk_size = 1000
indexes = np.arange(chunk_size,len(df_ssa), chunk_size)
for c in df_ssa.columns:
dfc = df_ssa[c]
cl = []
rs = []
start_ind = 0
for stop_ind in indexes:
ts = dfc[start_ind : stop_ind]
N = int(len(ts)) # number of samples
T = 144 # sample daily frequency (6 samples per hour)
embedding_dimension = int(N / T)
ssa = mySSA(ts)
ssa.embed(embedding_dimension=embedding_dimension,verbose=True)
res_streams = [j for j in range(limit,embedding_dimension)]
ssa.decompose(verbose=True)
principal = ssa.view_reconstruction(*[ssa.Xs[i] for i in p_inds], names=p_inds, plot=False, return_df=True)
residual = ssa.view_reconstruction(*[ssa.Xs[i] for i in res_streams], names=res_streams, plot=False, return_df=True)
cl.extend(list(principal['Reconstruction']))
rs.extend(list(residual['Reconstruction']))
start_ind = stop_ind
print("Clean sequence length for ",c," = ",len(cl))
df_clean[c] = cl
df_residual[c] = rs
df_clean.index = df_ssa.index[0:50000]
df_residual.index = df_ssa.index[0:50000]
df_ssa[0:50000].to_pickle("df_wind_speed.pkl")
df_clean.to_pickle("df_wind_speed_ssa_clean.pkl")
df_residual.to_pickle("df_wind_speed_ssa_residual.pkl")
###Output
_____no_output_____
###Markdown
Comparison of the Original Series and the SSA Reconstruction
###Code
y_clean_wind = df_clean[0:576].WTG01
y_obs_wind = df_ssa[0:576].WTG01
#x_date = pd.date_range("00:00", "24:00", freq="10min").strftime('%H:%M')
#xn = np.arange(len(x_date))
xn = np.arange(0,576)
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
plt.plot(xn, y_clean_wind.values, label='Trend plus Harmonic')
plt.plot(xn, y_obs_wind.values, label='Original Time Series')
ax.set_xlabel('Time')
ax.set_ylabel('Wind Speed [m/s]')
ax.legend(loc='best')
ticks = [72,216,360,504]
ax.set_xticks(ticks)
#ax.set_xticklabels(x_date[ticks])
ax.set_xticklabels(['01-May','02-May','03-May','04-May'])
plt.show()
y_residual_wind = df_residual[0:576].WTG01
xn = np.arange(0,576)
fig = plt.figure(figsize=(10,8))
# Residual series
ax = fig.add_subplot(211)
plt.plot(xn, y_residual_wind.values)
ax.set_xlabel('Time')
ax.set_title('Noise Residual')
#ax.legend(loc='best')
ticks = [72,216,360,504]
ax.set_xticks(ticks)
ax.set_xticklabels(['01-May','02-May','03-May','04-May'])
# Original series
ax2 = fig.add_subplot(212)
plt.plot(xn, y_obs_wind.values)
ax2.set_xlabel('Time')
ax2.set_ylabel('Wind Speed [m/s]')
ax2.set_title('Original Time Series')
#ax2.legend(loc='best')
ticks = [72,216,360,504]
ax2.set_xticks(ticks)
ax2.set_xticklabels(['01-May','02-May','03-May','04-May'])
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
ACF and PACF Plots
###Code
#Set target and input variables
target_station = 'WTG01'
df = pd.read_pickle("df_wind_speed.pkl")
df_ssa_clean = pd.read_pickle("df_wind_speed_ssa_clean.pkl")
df_ssa_residual = pd.read_pickle("df_wind_speed_ssa_residual.pkl")
# Get data from the interval of interest
interval = ((df.index >= '2017-05') & (df.index <= '2018-05'))
df = df.loc[interval]
df_ssa_clean = df_ssa_clean.loc[interval]
df_ssa_residual = df_ssa_residual.loc[interval]
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
import datetime
pivot = df.index[0]
start = pivot.strftime('%Y-%m-%d')
pivot = pivot + datetime.timedelta(days=34)
end = pivot.strftime('%Y-%m-%d')
fig = plot_acf(df_ssa_clean[start:end][target_station].values, lags=15)
fig.savefig("acf_wind_clean.png")
fig = plot_pacf(df_ssa_clean[start:end][target_station].values, lags=15)
fig.savefig("pacf_wind_clean.png")
fig = plot_acf(df_ssa_residual[start:end][target_station].values, lags=15)
fig.savefig("acf_wind_residual.png")
plot_pacf(df_ssa_residual[start:end][target_station].values, lags=15)
fig.savefig("pacf_wind_residual.png")
fig = plot_acf(df[start:end][target_station].values, lags=15)
fig.savefig("acf_wind_raw.png")
fig = plot_pacf(df[start:end][target_station].values, lags=15)
fig.savefig("pacf_wind_raw.png")
###Output
_____no_output_____
###Markdown
Correlation Map after SSA Decomposition
###Code
#correlation matrix
corrmat = df_ssa_residual.corr()
f, ax = plt.subplots(figsize=(12, 9))
#sns.heatmap(corrmat, vmax=.8, square=True);
sns.heatmap(corrmat, cbar=True, annot=True, square=True, fmt='.2f', vmin=.5)
###Output
_____no_output_____ |
examples/WideAndDeepClassifier.ipynb | ###Markdown
Training
###Code
model.fit(X, y, total_steps=10)
###Output
WARNING:tensorflow:From /Users/geyingli/Library/Python/3.8/lib/python/site-packages/tensorflow/python/util/dispatch.py:201: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
###Markdown
Inference
###Code
model.predict(X)
###Output
INFO:tensorflow:Time usage 0m-1.08s, 0.93 steps/sec, 1.85 examples/sec
###Markdown
Scoring
###Code
model.score(X, y)
###Output
INFO:tensorflow:Time usage 0m-0.65s, 1.53 steps/sec, 3.05 examples/sec
|
examples/ami/ami.ipynb | ###Markdown
Settings for paths
###Code
root_dir = Path('data')
output_dir = root_dir / 'ami_nb'
###Output
_____no_output_____
###Markdown
Download the dataset
###Code
download(root_dir)
###Output
_____no_output_____
###Markdown
Prepare audio and supervision manifests
###Code
ami_manifests = prepare_ami(root_dir, output_dir)
###Output
_____no_output_____
###Markdown
Extract features
###Code
example = ami_manifests['dev']
feature_set_builder = FeatureSetBuilder(
feature_extractor=Fbank(),
output_dir=f'{output_dir}/feats_example'
)
feature_set = feature_set_builder.process_and_store_recordings(
recordings=example['audio'],
num_jobs=1
)
example['feats'] = feature_set
example['cuts'] = CutSet.from_manifests(supervision_set=example['supervisions'], feature_set=feature_set)
###Output
_____no_output_____
###Markdown
Make pytorch Dataset for ASR task
###Code
asr_dataset = SpeechRecognitionDataset(example['cuts'].trim_to_supervisions())
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sample = asr_dataset[1]
print(sample['text'])
plt.matshow(sample['features'].transpose(0, 1).flip(0))
###Output
WELCOME EVERYBODY
###Markdown
Make pytorch Dataset for VAD task
###Code
vad_dataset = VadDataset(example['cuts'].cut_into_windows(10.0, keep_excessive_supervisions=True))
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sample = vad_dataset[3]
label_height = 10
vad_label = torch.stack([sample['is_voice'] for i in range(label_height)]).reshape(label_height, 1000)
plt.matshow(vad_label)
plt.matshow(sample['features'].transpose(0, 1).flip(0))
###Output
_____no_output_____
###Markdown
Settings for paths
###Code
root_dir = Path('data')
output_dir = root_dir / 'ami_nb'
###Output
_____no_output_____
###Markdown
Download the dataset
###Code
download(root_dir)
###Output
_____no_output_____
###Markdown
Prepare audio and supervision manifests
###Code
ami_manifests = prepare_ami(root_dir, output_dir)
###Output
_____no_output_____
###Markdown
Extract features
###Code
example = ami_manifests['dev']
with LilcomFilesWriter(f'{output_dir}/feats_example') as storage:
feature_set_builder = FeatureSetBuilder(
feature_extractor=Fbank(),
storage=storage,
)
feature_set = feature_set_builder.process_and_store_recordings(
recordings=example['recordings'],
num_jobs=1
)
example['feats'] = feature_set
example['cuts'] = CutSet.from_manifests(supervisions=example['supervisions'], features=feature_set)
###Output
_____no_output_____
###Markdown
Make pytorch Dataset for ASR task
###Code
asr_dataset = SpeechRecognitionDataset(example['cuts'].trim_to_supervisions())
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sample = asr_dataset[1]
print(sample['text'])
plt.matshow(sample['features'].transpose(0, 1).flip(0))
###Output
WELCOME EVERYBODY
###Markdown
Make pytorch Dataset for VAD task
###Code
vad_dataset = VadDataset(example['cuts'].cut_into_windows(10.0, keep_excessive_supervisions=True))
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sample = vad_dataset[3]
label_height = 10
vad_label = torch.stack([sample['is_voice'] for i in range(label_height)]).reshape(label_height, 1000)
plt.matshow(vad_label)
plt.matshow(sample['features'].transpose(0, 1).flip(0))
###Output
_____no_output_____
###Markdown
Settings for paths
###Code
root_dir = Path('data')
output_dir = root_dir / 'ami_nb'
###Output
_____no_output_____
###Markdown
Download the dataset
###Code
download_ami(root_dir)
###Output
_____no_output_____
###Markdown
Prepare audio and supervision manifests
###Code
ami_manifests = prepare_ami(root_dir, output_dir)
###Output
_____no_output_____
###Markdown
Extract features
###Code
example = ami_manifests['dev']
example['cuts'] = CutSet.from_manifests(
recordings=example['recordings'],
supervisions=example['supervisions']
).compute_and_store_features(
extractor=Fbank(),
storage_path=f'{output_dir}/feats_example',
storage_type=LilcomFilesWriter,
)
###Output
_____no_output_____
###Markdown
Make PyTorch Dataset for ASR task
###Code
asr_dataset = K2SpeechRecognitionDataset(example['cuts'].trim_to_supervisions())
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sampler = SimpleCutSampler(asr_dataset.cuts, shuffle=False, max_cuts=4)
cut_ids = next(iter(sampler))
sample = asr_dataset[cut_ids]
print(sample['supervisions']['text'][0])
plt.matshow(sample['inputs'][0].transpose(0, 1).flip(0))
###Output
_____no_output_____
###Markdown
Make PyTorch Dataset for VAD task
###Code
vad_dataset = VadDataset(example['cuts'].cut_into_windows(10.0, keep_excessive_supervisions=True))
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sampler = SimpleCutSampler(vad_dataset.cuts, shuffle=False, max_cuts=4)
cut_ids = next(iter(sampler))
sample = vad_dataset[cut_ids]
label_height = 10
vad_label = torch.stack([sample['is_voice'] for i in range(label_height)]).reshape(label_height, 1000)
plt.matshow(vad_label)
plt.matshow(sample['inputs'][0].transpose(0, 1).flip(0))
###Output
_____no_output_____
###Markdown
Settings for paths
###Code
root_dir = Path('data')
output_dir = root_dir / 'ami_nb'
###Output
_____no_output_____
###Markdown
Download the dataset
###Code
download_ami(root_dir)
###Output
_____no_output_____
###Markdown
Prepare audio and supervision manifests
###Code
ami_manifests = prepare_ami(root_dir, output_dir)
###Output
_____no_output_____
###Markdown
Extract features
###Code
example = ami_manifests['dev']
example['cuts'] = CutSet.from_manifests(
recordings=example['recordings'],
supervisions=example['supervisions']
).compute_and_store_features(
extractor=Fbank(),
storage_path=f'{output_dir}/feats_example',
storage_type=LilcomFilesWriter,
)
###Output
_____no_output_____
###Markdown
Make PyTorch Dataset for ASR task
###Code
asr_dataset = K2SpeechRecognitionDataset(example['cuts'].trim_to_supervisions())
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sampler = SingleCutSampler(asr_dataset.cuts, shuffle=False, max_cuts=4)
cut_ids = next(iter(sampler))
sample = asr_dataset[cut_ids]
print(sample['supervisions']['text'][0])
plt.matshow(sample['inputs'][0].transpose(0, 1).flip(0))
###Output
_____no_output_____
###Markdown
Make PyTorch Dataset for VAD task
###Code
vad_dataset = VadDataset(example['cuts'].cut_into_windows(10.0, keep_excessive_supervisions=True))
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sampler = SingleCutSampler(vad_dataset.cuts, shuffle=False, max_cuts=4)
cut_ids = next(iter(sampler))
sample = vad_dataset[cut_ids]
label_height = 10
vad_label = torch.stack([sample['is_voice'] for i in range(label_height)]).reshape(label_height, 1000)
plt.matshow(vad_label)
plt.matshow(sample['inputs'][0].transpose(0, 1).flip(0))
###Output
_____no_output_____
###Markdown
Settings for paths
###Code
root_dir = Path('data')
output_dir = root_dir / 'ami_nb'
###Output
_____no_output_____
###Markdown
Download the dataset
###Code
download(root_dir)
###Output
_____no_output_____
###Markdown
Prepare audio and supervision manifests
###Code
ami_manifests = prepare_ami(root_dir, output_dir)
###Output
_____no_output_____
###Markdown
Extract features
###Code
example = ami_manifests['dev']
with LilcomFilesWriter(f'{output_dir}/feats_example') as storage:
cut_set = CutSet.from_manifests(
recordings=example['recordings'],
supervisions=example['supervisions']
).compute_and_store_features(
extractor=Fbank(),
storage=storage,
)
example['cuts'] = cut_set
###Output
_____no_output_____
###Markdown
Make pytorch Dataset for ASR task
###Code
asr_dataset = SpeechRecognitionDataset(example['cuts'].trim_to_supervisions())
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sample = asr_dataset[1]
print(sample['text'])
plt.matshow(sample['features'].transpose(0, 1).flip(0))
###Output
WELCOME EVERYBODY
###Markdown
Make pytorch Dataset for VAD task
###Code
vad_dataset = VadDataset(example['cuts'].cut_into_windows(10.0, keep_excessive_supervisions=True))
###Output
_____no_output_____
###Markdown
Illustration of an example
###Code
sample = vad_dataset[3]
label_height = 10
vad_label = torch.stack([sample['is_voice'] for i in range(label_height)]).reshape(label_height, 1000)
plt.matshow(vad_label)
plt.matshow(sample['features'].transpose(0, 1).flip(0))
###Output
_____no_output_____ |
tutorials/W2D3_BiologicalNeuronModels/W2D3_Outro.ipynb | ###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1uK411J7pF", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"pMxb9vA-_4U", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Daily survey. Don't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey. Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/agpmt/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1uK411J7pF", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"pMxb9vA-_4U", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Reflections. Don't forget to complete your reflections and content checks! Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/agpmt/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1uK411J7pF", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"pMxb9vA-_4U", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Daily survey. Don't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey. Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/agpmt/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1uK411J7pF", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"pMxb9vA-_4U", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Daily survey. Don't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey. Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/agpmt/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
[Open in Kaggle](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D3_BiologicalNeuronModels/W2D3_Outro.ipynb) Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1uK411J7pF", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"pMxb9vA-_4U", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Reflections. Don't forget to complete your reflections and content checks! Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/agpmt/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____ |
notebooks/Dataset Analysis.ipynb | ###Markdown
Inter-Annotator Agreement
###Code
import sklearn
from sklearn.metrics import cohen_kappa_score
import statsmodels
from statsmodels.stats.inter_rater import fleiss_kappa
# from sklearn.metrics import cohen_kappa_score
# y_true = [2, 0, 2, 2, 0, 1]
# y_pred = [0, 0, 2, 2, 0, 2]
#
# from itertools import combinations
# pairs = list(combinations(range(5), 2))
###Output
_____no_output_____
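###Markdown
As a quick sanity check of `cohen_kappa_score` before applying it to the annotations, here is the toy example from the commented-out lines above (values purely illustrative):
###Code
# Two toy annotators over six items; kappa corrects raw agreement for chance.
toy_a = [2, 0, 2, 2, 0, 1]
toy_b = [0, 0, 2, 2, 0, 2]
cohen_kappa_score(toy_a, toy_b)  # returns a value in [-1, 1]
###Output
_____no_output_____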
###Markdown
Sentence-wise
###Code
# import modules
import matplotlib.pyplot as mp
import pandas as pd
import seaborn as sb
import numpy as np
def has_misspelling(val):
return int(len(val)==0)
mat = []
for p1 in range(5):
row = []
for p2 in range(5):
if p2 >= p1:
row.append(0)
continue
d1 = data[p1]
d2 = data[p2]
d = d1.merge(d2, on="data", how="inner")
m1 = d["label_x"].apply(has_misspelling)
m2 = d["label_y"].apply(has_misspelling)
row.append(cohen_kappa_score(m1, m2))
print(p1, p2, cohen_kappa_score(m1, m2), (m1!=m2).sum())
mat.append(row)
mask = np.triu(np.ones_like(mat))
# # plotting a triangle correlation heatmap
dataplot = sb.heatmap(mat, cmap="YlGnBu", annot=True, mask=mask, vmin=0, vmax=1)
fig = dataplot.get_figure()
# fig.savefig("../Figures/iaa1.png")
###Output
_____no_output_____
###Markdown
Token-wise
###Code
def get_segments(text, labels):
seg = []
pt = 0
labels = sorted(labels, key=lambda x: x[0], reverse=False)
for l in labels:
seg.append({"text": text[pt:l[0]], "misp": False, "int":False, "s": pt, "t":l[0]})
mint = (l[2]=='ตั้งใจ')
seg.append({"text": text[l[0]:l[1]], "misp": True, "int":mint, "s": l[0], "t":l[1]})
pt = l[1]
seg.append({"text": text[pt:], "misp": False, "int":False, "s": pt, "t":len(text)})
length = sum([len(s["text"]) for s in seg])
assert(length==len(text))
return seg
def overlap_segments(text, segments):
idx = set()
for seg in segments:
for s in seg:
idx.add(s["s"])
idx.add(s["t"])
idx = sorted(idx)
newseg = []
for i, _ in enumerate(idx):
if i==0:
continue
newseg.append({
"text": text[idx[i-1]:idx[i]],
"s": idx[i-1],
"t": idx[i],
})
o = []
for seg in segments:
ns = []
for s in newseg:
for ref in seg:
if s["s"] >= ref["s"] and s["s"] < ref["t"]:
break
ns.append({
"text": s["text"],
"misp": ref["misp"],
"int": ref["int"],
"s": s["s"],
"t": s["t"],
})
o.append(ns)
return o
mat = []
for p1 in range(5):
row = []
for p2 in range(5):
if p2 >= p1:
row.append(0)
continue
d1 = data[p1]
d2 = data[p2]
d = d1.merge(d2, on="data", how="inner")
s1 = []
s2 = []
for idx, sent in d.iterrows():
seg1 = get_segments(sent["data"], sent["label_x"])
seg2 = get_segments(sent["data"], sent["label_y"])
seg = overlap_segments(sent["data"], [seg1, seg2])
for s in seg[0]:
if not s["misp"]:
s1.append(0)
elif s["int"]:
s1.append(1)
else:
s1.append(2)
for s in seg[1]:
if not s["misp"]:
s2.append(0)
elif s["int"]:
s2.append(1)
else:
s2.append(2)
row.append(cohen_kappa_score(s1, s2))
print(p1, p2, cohen_kappa_score(s1, s2), (np.array(s1)!=np.array(s2)).sum()*100/len(s1))
mat.append(row)
mask = np.triu(np.ones_like(mat))
# # plotting a triangle correlation heatmap
dataplot = sb.heatmap(mat, cmap="YlGnBu", annot=True, mask=mask, vmin=0, vmax=1)
fig = dataplot.get_figure()
# fig.savefig("Figures/iaa2.png")
###Output
_____no_output_____
###Markdown
Intention Labelling Entropy across Sentences
###Code
from pythainlp.tokenize import word_tokenize
engine = "deepcut"
# word_tokenize("ฉันรักแมว", engine=engine)
reference = load_jsonl(f"{DIR}/missplling_train_wisesight_samples.jsonl")
reference = pd.DataFrame(reference)
O = reference
for i, d in enumerate(data):
d = d[["data", "label"]]
d.columns = ["text", f"misp{i}"]
O = O.merge(d, on="text", how="left")
import copy
from tqdm import tqdm
from itertools import groupby
def norm_word(word):
groups = [list(s) for _, s in groupby(word)]
ch = []
extraToken = ""
for g in groups:
if len(g)>=3:
extraToken = "<rep>"
ch.append(g[0])
word = "".join(ch)+extraToken
return word
def tolabel(n):
if n==2:
return "neg"
elif n==1:
return "neu"
elif n==0:
return "pos"
else:
raise(f"Unknow label: {n}")
merged = []
for idx, row in tqdm(O.iterrows(), total=len(O)):
segs = []
for i in range(5):
if pd.isna([row[f"misp{i}"]]).all():
seg = get_segments(row["text"], [])
else:
seg = get_segments(row["text"], row[f"misp{i}"])
segs.append(seg)
# seg2 = get_segments(sent["data"], sent["label_y"])
o = overlap_segments(row["text"], segs)
tokens = []
# if row["text"]=="อีดออกกก ฟังเพลงล้ะมีโฆษณาเอ็มเคชีสซิ๊ดเเซ่บคือเหี้ยใร":
# print(o)
for i in range(len(o[0])):
s = copy.copy(o[0][i])
mispProb = 0
intProb = 0
# assert(len(o)==5)
for j in range(len(o)):
if (o[j][i]["misp"]):
mispProb += 1/3
if (o[j][i]["int"]):
intProb += 1/3
assert(mispProb <= 1)
assert(intProb <= 1)
if (mispProb < 0.5) and (intProb < 0.5):
continue
s["int"] = intProb
s["msp"] = mispProb
if s["text"]=="ใร":
print(s, row["text"])
# s["misp"] = s["text"]
# del s["text"]
# s["int"] = (intProb > 0.5)
# s["tokens"] = word_tokenize(s["text"], engine=engine)
s["corr"] = None
tokens.append(s)
merged.append({
"text": row["text"],
"category": tolabel(row["meta"]["category"]),
"misp_tokens": tokens
})
merged = pd.DataFrame(merged)
# {"corr": "ไหม", "misp": "มั้ย", "int": true, "s": 67, "t": 71}
tokenized = load_jsonl(f"{DIR}/../tokenized_train-misp-3000.jsonl")
tokenized = pd.DataFrame(tokenized)
# merged
# tokenized
sents = merged.merge(tokenized[["text", "segments"]], on="text")
from collections import defaultdict
cnt = defaultdict(int)
cntmsp = defaultdict(list)
cntint = defaultdict(list)
def cal_entropy(labels):
s = pd.Series(labels)
counts = s.value_counts()
return entropy(counts)
for idx, sent in sents.iterrows():
for seg in sent["segments"]:
for w in seg[0]:
cnt[norm_word(w)] += 1
for m in sent["misp_tokens"]:
norm = norm_word(m["text"])
# cntmsp[norm].append(int(m["msp"] > 0.5))
cntint[norm].append(int(m["int"] > 0.5))
# cntint[norm].append(m["int"])
from scipy.stats import entropy
mispconsis = {}
for m in cntint:
if (cnt[m] < 5):
continue
mispconsis[m] = cal_entropy(cntint[m])
# mispconsis
mispconsis
values = {k: v for k, v in sorted(mispconsis.items(), key=lambda item: -item[1])}
x = []
y = []
for i, k in enumerate(values):
x.append(k)
y.append(values[k])
# print(k, values[k])
# if i > 10:
# break
# # import warnings
# # warnings.filterwarnings("ignore")
# import matplotlib.pyplot as plt
# import matplotlib.font_manager as fm
# font_dirs = ["../"]
# font_files = fm.findSystemFonts(fontpaths=font_dirs)
# for font_file in font_files:
# fm.fontManager.addfont(font_file)
# # set font
# import matplotlib.pyplot as plt
# plt.rcParams['font.family'] = 'TH Sarabun New'
# plt.rcParams['xtick.labelsize'] = 20.0
# plt.rcParams['ytick.labelsize'] = 20.0
# # mx = len(x)
# # plt.ylim(0, 1)
# # plt.xticks(rotation=90)
# # plt.rcParams["figure.figsize"] = (20,3)
# # plt.xticks([], [])
# # plt.bar(x[0:mx], y[0:mx])
# y
plt.rcParams["figure.figsize"] = (10,6)
plt.rcParams.update({'font.size': 25})
plt.ylabel("No. misspelt words")
plt.xlabel("Entropy of the annotated labels (intentional/unintentional)")
plt.hist(y, bins=10)
plt.savefig('../Figures/int_entropy.png')
# save_jsonl(merged, f"{DIR}/train-misp-3000.jsonl")
# from collections import defaultdict
# labels = defaultdict(int)
# for sent in unified:
# labels[sent["category"]] += 1
# # break
# labels
###Output
_____no_output_____
###Markdown
Term Frequency
###Code
from collections import defaultdict
cnt = defaultdict(int)
misp = defaultdict(int)
for idx, sent in sents.iterrows():
mispFound = False
for seg in sent["segments"]:
for m, c in zip(seg[0], seg[1]):
if m!=c:
mispFound = True
cnt["misp"] += 1
n = norm_word(m)
misp[n] += 1
cnt["token"] += 1
if mispFound:
cnt["sent"] += 1
# break?
print("%Mispelling Sentence:", cnt["sent"]*100/len(sents), cnt["sent"])
print()
print("#Misspelling:", cnt["misp"])
print("%Misspelling:", cnt["misp"]*100/cnt["token"])
print("#Unique Misspelling Tokens:", len(misp))
# from transformers import XLMRobertaTokenizerFast
# tokenizer = XLMRobertaTokenizerFast.from_pretrained('xlm-roberta-base')
# import itertools
# cnt["misp_sub"] = 0
# cnt["corr_sub"] = 0
# unk = tokenizer.convert_tokens_to_ids(["<unk>"])[0]
# for sent in trainmisp:
# s = [list(zip(seg[0], seg[1])) for seg in sent["segments"]]
# tokens = list(itertools.chain(*s))
# # misptokens = [t[0] for t in tokens]
# # midx = tokenizer.convert_tokens_to_ids(misptokens)
# # for i in range(len(midx)):
# # if midx[i]==unk:
# # t = tokenizer.tokenize("_"+misptokens[i])[1:]
# # cnt["misp_sub"] += len(t)
# # else:
# # cnt["misp_sub"] += 1
# cnt["misp_sub"] += len(tokens)
# cnt["corr_sub"] += len(tokenizer.tokenize(sent["text"]))
# # break
# print("%Subtokens Different", abs(cnt["misp_sub"]-cnt["corr_sub"])*100/cnt["corr_sub"])
###Output
_____no_output_____
###Markdown
Most common misspelling words
###Code
sortedmisp = {k: v for k, v in sorted(misp.items(), key=lambda item: -item[1])}
for i, k in enumerate(sortedmisp):
print(k, sortedmisp[k])
if i > 10:
break
x = [x for x in sortedmisp]
y = [sortedmisp[k] for k in sortedmisp]
mint = [int(np.average(cntint[k]) > 0.5) for k in sortedmisp]
c = ['b' if i==1 else 'r' for i in mint]
mx = 100
# plt.ylim(0, 1.2)
# plt.xticks(rotation=90)
# plt.rcParams["figure.figsize"] = (20,3)
plt.xticks([], [])
plt.bar(x[0:mx], y[0:mx], color=c[0:mx])
plt.ylabel("Frequency")
plt.xlabel(f"\nMisspelling terms (top {mx})")
plt.savefig("../Figures/tf.png")
# _x = np.array(range(len(x)))
# _y = (np.full(len(x), y[0]))/(_x+1)
# plt.plot(_x[0:mx], _y[0:mx], "r")
print("Common mispelling")
cc = 0
obs = 0
for i in range(len(x)):
if cc <= 20:
print(cc, x[i], y[i], 'int' if mint[i]==1 else 'un')
cc += 1
obs += y[i]
print("Common intentional mispelling")
cc = 0
obs = 0
for i in range(len(x)):
if mint[i]==1:
if cc <= 15:
print(cc, x[i], y[i])
cc += 1
obs += y[i]
print("#Intentional Words:", cc, obs)
print("Common unintentional mispelling")
cc = 0
obs = 0
for i in range(len(x)):
if mint[i]!=1:
if cc <= 10:
print(x[i], y[i])
cc += 1
obs += y[i]
print("#Unintentional Words:", cc, obs)
###Output
Common unintentional mispelling
ค่ะ 16
คะ 11
จ่ะ 4
แล้ว 3
อ้ะ 3
นะค่ะ 2
น่ะ 2
ไม 2
ฟิน 2
นะ 2
หร่อย 2
#Unintentional Words: 156 202
###Markdown
Sentiment Class and Misspelling
###Code
from collections import defaultdict
from itertools import groupby
labels = defaultdict(int)
for idx, sent in sents.iterrows():
mispFound = False
for seg in sent["segments"]:
for m, c in zip(seg[0], seg[1]):
if m!=c:
mispFound = True
if mispFound:
labels[sent["category"]] += 1
for l in labels:
print(l, labels[l], labels[l]/cnt["sent"])
###Output
neg 382 0.39340885684860966
pos 346 0.35633367662203913
neu 243 0.25025746652935116
###Markdown
Extraction of Real and Synthetic RIRs
###Code
rirs_real = np.zeros([L, I, J, D])
rirs_synt = np.zeros([L, I, J, D])
mics = np.zeros([3, I])
srcs = np.zeros([3, J])
for d in tqdm(range(D), desc='Loop datasets'):
for i in tqdm(range(I), desc='Lood mic', leave=False):
for j in range(J):
dataset_id = datasets[d]
# get rir from the recondings
dset.set_dataset(dataset_id)
dset.set_entry(i, j)
mic, src = dset.get_mic_and_src_pos()
mics[:, i] = mic
srcs[:, j] = src
_, rrir = dset.get_rir()
# get synthetic rir
sdset = SyntheticDataset()
sdset.set_room_size(constants['room_size'])
sdset.set_dataset(dataset_id, absb=0.85, refl=0.15)
sdset.set_c(c)
sdset.set_k_order(17)
sdset.set_mic(mics[0, i], mics[1, i], mics[2, i])
sdset.set_src(srcs[0, j], srcs[1, j], srcs[2, j])
_, srir = sdset.get_rir()
Ls = len(srir)
# measure after calibration
rirs_real[:, i, j, d] = rrir[:L]
rirs_synt[:Ls, i, j, d] = srir[:Ls]
print('done with the extraction')
###Output
_____no_output_____
###Markdown
Computation of RT60 and DRR. RT60 is computed as in this post; DRR is computed as in this post.
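As a reference for `rt60_with_sabine` used in the next cell, here is a minimal sketch of Sabine's relation (assuming SI units, a shoebox room, and a single uniform absorption coefficient):
###Code
# Hedged sketch of Sabine's formula: RT60 = 0.161 * V / (alpha * S).
def sabine_rt60_sketch(room_size, absorption):
    Lx, Ly, Lz = room_size             # shoebox dimensions in metres (assumption)
    V = Lx * Ly * Lz                   # room volume
    S = 2 * (Lx*Ly + Lx*Lz + Ly*Lz)    # total surface area
    return 0.161 * V / (absorption * S)
###Output
_____no_output_____
###Markdown
The next cell computes RT60 and DRR for every real and synthetic RIR and collects the results in a DataFrame.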
###Code
from dechorate.utils.acu_utils import rt60_from_rirs, rt60_with_sabine, ddr_from_rir
df = pd.DataFrame()
counter = 0
for d in tqdm(range(D)):
for j in range(J):
for i in range(I):
rrir = normalize(rirs_real[:, i, j, d])
srir = normalize(rirs_synt[:, i, j, d])
rt60_real = rt60_from_rirs(rrir, Fs, M=100, snr=45, do_shroeder=True, val_min=-90)
rt60_synt = rt60_from_rirs(srir, Fs, M=100, snr=45, do_shroeder=True, val_min=-90)
dist = np.linalg.norm(mics[:,i] - srcs[:,j])
# print(dist / c)
n0 = int(Fs * dist / c)
assert c > 0
ddr_real = ddr_from_rir(rrir, n0, Fs)
# print(ddr_real)
ddr_synt = ddr_from_rir(srir, n0, Fs)
# print(ddr_synt)
# print(n0)
# plt.plot(np.arange(L)/Fs, np.abs(rrir))
# plt.plot(np.arange(L)/Fs, np.abs(srir))
# plt.plot(n0/Fs, 1, 'o')
# plt.plot((n0+120)/Fs, 1, 'o')
# plt.xlim(0, 0.01)
# plt.show()
sdset = SyntheticDataset()
sdset.set_room_size(constants['room_size'])
sdset.set_dataset(dataset_id, absb=0.85, refl=0.15)
rt60_sabi = rt60_with_sabine(constants['room_size'], sdset.absorption)
df.at[counter, 'd'] = datasets[d]
df.at[counter, 'mic_id'] = i
df.at[counter, 'arr_id'] = i // 5
df.at[counter, 'src_id'] = j
df.at[counter, 'ddr_real'] = ddr_real
df.at[counter, 'ddr_synt'] = ddr_synt
df.at[counter, 'rt60_real'] = rt60_real
df.at[counter, 'rt60_synt'] = rt60_synt
df.at[counter, 'rt60_sabi'] = rt60_sabi
counter += 1
df
df.to_csv(data_dir + 'processed/ddr_rt60.csv')
plt.figure()
x = 'd'
y = 'rt60_real'
hue = None
col = None
row = None
sns.boxplot(x=x, y=y, hue=hue, data=df)
y = 'rt60_synt'
sns.boxplot(x=x, y=y, hue=hue, data=df)
plt.figure()
x = 'arr_id'
y = 'ddr_real'
hue = None
col = None
row = None
sns.boxplot(x=x, y=y, hue=hue, data=df)
y = 'ddr_synt'
sns.boxplot(x=x, y=y, hue=hue, data=df)
###Output
_____no_output_____
###Markdown
Dataset Info===This notebook displays summary info about the dataset. Specifically it looks at: - **Segment Label Distributions:** The vocabulary of and frequency at which segment labels occurs - **Segment Length Distributions:** The distribution of lengths of various segments in the dataset - **Beat / Tempo Distributions:** The lengths of beats per track and across tracks in the dataset - **Time Signature Distributions:** The number of beats within each bar within each track and across tracks - **Time Signature / Tempo Change Frequency:** The frequency of multimodal tempos and beat per bar numbers within each track - Still to do. - **Segmentation Experiments:** Running MSAF algorithms with different sets of beat annotations/estimations. Helper Functions===
###Code
# Define Plotting Function
def dict_to_bar(data, title, xlabel, ylabel, fontsize=10, alpha=1, label="plot",
color="blue", edgecolor="blue"):
data = collections.OrderedDict(sorted(data.items()))
idxs = np.arange(len(data.values()))
plt.bar(idxs, data.values(), alpha=alpha, label=label,
color=color, edgecolor=edgecolor)
ax = plt.axes()
ax.set_xticks(idxs)
ax.set_xticklabels(data.keys())
plt.ylabel(ylabel)
plt.xlabel(xlabel)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(fontsize)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(fontsize)
###Output
_____no_output_____
###Markdown
Reading in Data===
###Code
import os
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
# Configure pandas display
pd.options.display.max_rows = 5
# Define dataset info
DATA_DIR = os.path.abspath('../dataset/')
BEAT_DIR = os.path.join(DATA_DIR, 'beats_and_downbeats')
BEAT_MARKER_COLUMN = 'BeatMarker'
BEAT_NUMBER_COLUMN = 'BeatNumber'
BAR_NUMBER_COLUMN = 'BarNumber'
BEATS_COLUMNS = [BEAT_MARKER_COLUMN, BEAT_NUMBER_COLUMN, BAR_NUMBER_COLUMN]
SEGMENT_DIR = os.path.join(DATA_DIR, 'segments')
SEG_BOUNDARY_COLUMN = 'SegmentStart'
SEG_LABEL_COLUMN = 'SegmentLabel'
SEGMENTS_COLUMNS = [SEG_BOUNDARY_COLUMN, SEG_LABEL_COLUMN]
# Load entire dataset into memory
beat_files = [os.path.join(BEAT_DIR, fname) for fname in os.listdir(BEAT_DIR)]
seg_files = [os.path.join(SEGMENT_DIR, fname) for fname in os.listdir(SEGMENT_DIR)]
beat_data = {os.path.basename(fname):pd.read_csv(fname, names=BEATS_COLUMNS, delimiter='\t') for fname in beat_files}
seg_data = {os.path.basename(fname):pd.read_csv(fname, names=SEGMENTS_COLUMNS, delimiter=' ') for fname in seg_files}
# Display some data
print("Tempo Data Sample:")
print("===")
print(" ")
for key, value in list(beat_data.items())[:1]:
print(key)
display(value)
print(" ")
print(" ")
print(" ")
print("Segmentation Data Sample:")
print("===")
print(" ")
for key, value in list(seg_data.items())[:1]:
print(key)
display(value)
%matplotlib inline
###Output
Tempo Data Sample:
===
0460_numb.txt
###Markdown
Distribution of Segment Labels and Segment Counts===
###Code
import numpy as np
import matplotlib.pyplot as plt
import collections
# Label Vocabulary
labels = set()
for _, df in seg_data.items():
labels |= set(df[SEG_LABEL_COLUMN])
print('All Labels:')
print(labels)
print(' ')
# Label Counts
label_dict = {label: 0 for label in labels}
for _, df in seg_data.items():
for item in df[SEG_LABEL_COLUMN]:
label_dict[item] += 1
print("Number Unique Labels:", len(label_dict.keys()))
filtered_label_dict = {key: value for key, value in label_dict.items() if value > 50 and key != 'section'} # <= Filter out labels that only occur once
plt.figure(figsize=(6, 3))
dict_to_bar(filtered_label_dict,
'Frequency of Segment Labels',
'',
'Frequency',
color='turquoise',
edgecolor='purple',
label='All Segments')
plt.xticks(rotation=60)
plt.tight_layout()
plt.savefig('../results/SegmentLabels_distribution.pdf')
# Segments per Track
num_segs = collections.defaultdict(int)
for _, df in seg_data.items():
num = len(df.index) - 1 # <= Every track has an extra "end" label marking the end, not start, of a segment.
num_segs[num] += 1
print("Counts of tracks with number of segments: ", num_segs)
num_segs = {count:num_segs[count] for count in range(max(num_segs.keys()))}
plt.figure(figsize=(6, 3))
dict_to_bar(num_segs,
'Distribution of Total Number of Segments per Track',
'Number Segments in Track',
'Number Tracks',
alpha=0.8,
color='turquoise',
edgecolor='purple',
label='All Segments')
# Number of Labels per Track
unique_labels = collections.defaultdict(int)
for _, df in seg_data.items():
labels = set(df[SEG_LABEL_COLUMN])
unique_labels[len(labels)] += 1
unique_labels = {count:unique_labels[count] for count in range(max(unique_labels.keys()))}
dict_to_bar(unique_labels,
'Unique Labels per Track in Dataset',
'Segment count',
'Number of tracks',
alpha=0.7,
color='purple',
edgecolor='purple',
label='Unique Segments')
plt.tight_layout()
plt.legend()
plt.savefig('../results/segment_label_count.pdf')
###Output
All Labels:
{'solo3', 'transition1', 'chrous', 'outro2', 'intro3', 'section16', 'solo', 'outro3', 'verse', 'bre', 'break2', 'prechrous', 'verse11', 'instrumental', 'raps', 'intro7', 'choruspart', 'outro', 'verse9', 'instrumental3', 'verse3', 'chorus', 'instchorus', 'section', 'chorushalf', 'intropt2', 'intro', 'verseinst', 'introchorus', 'instrumentalverse', 'chorusinst', 'prechorus3', 'section11', 'prechorus5', 'section4', 'solo2', 'transition', 'breakdown2', 'intro6', 'section2', 'bridge2', 'transtiion', 'bigoutro', 'inst', 'bridge1', 'section14', 'gtr', 'intro8', 'end', 'intchorus', 'verse4', 'break1', 'section7', 'isnt', 'worstthingever', 'tranisition', 'saxobeat', 'intro2', 'section1', 'altchorus', 'chorus1', 'opening', 'instintro', 'prechorus', 'intro5', 'prechorus2', 'refrain', 'intro4', 'section13', 'slow', 'section8', 'build', 'quiet', 'breakdown', 'introverse', 'instrumental2', 'guitar', 'bridge3', 'preverse', 'verse8', 'break', 'instbridge', 'postchorus2', 'prehorus', 'section10', 'silence', 'oddriff', 'verse_slow', 'section17', 'stutter', 'transition3', 'chorus3', 'verse1', 'vocaloutro', 'synth', 'verse1a', 'prechors', 'bridge', 'versepart', 'inrto', 'verse7', 'inst2', 'outroa', 'drumroll', 'slow2', 'guitarsolo', 'break3', 'gtrbreak', 'chrous2', 'verse2', 'verse10', 'miniverse', 'section6', 'verse5', 'postchorus', 'vese', 'section3', 'gtr2', 'mainriff2', 'verse6', 'section9', 'fast', 'postverse', 'section15', 'section12', 'rhythmlessintro', 'quietchorus', 'mainriff', 'section5', 'outro1', 'transition2a', 'chorus2', 'transition2', 'fadein', 'slowverse', 'chorus_instrumental'}
Number Unique Labels: 136
Counts of tracks with number of segments: defaultdict(<class 'int'>, {11: 135, 8: 114, 9: 135, 17: 7, 10: 147, 12: 105, 15: 30, 13: 64, 7: 71, 6: 21, 14: 38, 16: 17, 18: 7, 19: 5, 5: 4, 28: 1, 22: 4, 20: 4, 24: 1, 25: 1, 21: 1})
###Markdown
Segment Length Distributions===
###Code
# Plot distribution of length of all segments
lengths = []
for c, df in seg_data.items():
this_track_lens = np.array(df[SEG_BOUNDARY_COLUMN])
lengths += [this_track_lens[1:] - this_track_lens[:-1]]
lengths = np.concatenate(lengths)
bins = 100
plt.figure(figsize=(6, 3))
plt.hist(lengths, bins, color='turquoise', edgecolor='purple')
plt.xlabel('Segment Length (s)')
plt.ylabel('Number of Segments')
plt.xlim((0, 80))
plt.tight_layout()
plt.savefig('../results/SegmentLength_distribution.pdf')
###Output
_____no_output_____
###Markdown
Beat / Tempo Distribution===
###Code
# Plot distribution of beat lengths across entire dataset
color = "turquoise"
edgecolor = "purple"
lengths = []
for _, df in beat_data.items():
this_track_lens = np.array(df[BEAT_MARKER_COLUMN])
lengths += [this_track_lens[1:] - this_track_lens[:-1]]
all_lengths = np.concatenate(lengths)
bins = 30
plt.hist(all_lengths, bins, color=color, edgecolor=edgecolor)
plt.title('Distribution of Beat Lengths Across all Tracks')
plt.xlabel('Beat Length (s)')
plt.ylabel('Frequency Within Beat Length Range')
# Plot distribution of variance in beat length within each track
beat_stds = np.array([np.std(track_beat_lens) for track_beat_lens in lengths])
bins = 200
plt.figure(figsize=(6, 2.5))
plt.hist(beat_stds, bins, align='mid', color=color, edgecolor=edgecolor)
# plt.title('Distribution of Beat Length Standard Deviation Within Each Track')
plt.xlabel('Beat Length Standard Deviation (s)')
plt.ylabel('Number of Tracks')
plt.xlim((0, 0.015))
plt.tight_layout()
plt.savefig('../results/BPM_std.pdf')
# Plot distribution of tempo (taken as median beat length for each track)
tempos = np.array([60.0/np.median(track_beat_lens) for track_beat_lens in lengths])
bins = 200
plt.figure(figsize=((7, 3)))
plt.hist(tempos, bins, color=color, edgecolor=edgecolor)
plt.xlabel('BPM')
plt.ylabel('Number of Tracks')
plt.xticks(np.arange(0, 250, 10))
plt.xlim((50, 220))
plt.tight_layout()
plt.savefig('../results/BPM_distribution.pdf')
###Output
_____no_output_____
###Markdown
Time Signature Distribution===
###Code
# Plot the distribution of beats per bar aggregated across tracks
beats_per_bar_each_track = []
for _, df in beat_data.items():
bar_numbers = np.array(df[BAR_NUMBER_COLUMN])
    bar_end_idxs = np.argwhere((bar_numbers[1:]-bar_numbers[:-1])>0) # <= We ignore the last bar as it is usually incomplete (e.g., the bar containing the final beat).
beat_numbers = np.array(df[BEAT_NUMBER_COLUMN])
beats_per_bar_each_track += [beat_numbers[bar_end_idxs]]
all_bars = np.concatenate(beats_per_bar_each_track)
bins = 29
hist_range = (np.min(all_bars)-0.5, np.max(all_bars)+0.5)
plt.hist(all_bars, bins, range=hist_range)
plt.title('Distribution of Beats Per Bar Across all Tracks')
plt.xlabel('Bar Length (beats)')
plt.ylabel('Frequency Within Bar Length Range')
# Plot the distribution of beats per bar per track
time_sigs = np.array([np.median(track_bar_lens) for track_bar_lens in beats_per_bar_each_track])
bins = 29
hist_range = (np.min(all_bars)-0.5, np.max(all_bars)+0.5)
plt.figure()
plt.hist(time_sigs, bins, range=hist_range)
plt.title('Distribution of Beats per Bar per Track in Dataset')
plt.xlabel('Median Bar Length (beats)')
plt.ylabel('Number of Tracks')
# Find the beat index for each segment
seg_start_beats = []
for beat_df, seg_df in zip(beat_data.values(), seg_data.values()):
seg_start_beats += [[]]
for seg_time in seg_df['SegmentStart']:
seg_start_beats[-1] += [np.argmin(np.abs(np.array(beat_df['BeatMarker'].values) - seg_time))]
beat_idxs = dict.fromkeys(['1','2','3','4','5','6'], 0)
for seg_start_idxs, beat_df in zip(seg_start_beats, beat_data.values()):
for idx in seg_start_idxs:
beat_idxs[str(beat_df['BeatNumber'].values[idx])] += 1
# Plot the counts for the beat number on which each of the segments start
plt.figure(figsize=(6, 3))
dict_to_bar(beat_idxs,
'Beat Number of Segment Beginning',
'Beat Number',
'Number of Segments',
color='turquoise',
edgecolor='purple')
plt.tight_layout()
plt.xlim((-0.5, 4.5))
plt.savefig('../results/Downbeat_Segment_Alignment.pdf')
###Output
/Users/mmccallum/Envs/Master3/lib/python3.7/site-packages/ipykernel_launcher.py:8: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
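###Markdown
The bar chart above can also be summarised as a single number: the fraction of segment boundaries that land on beat 1 (the downbeat). The short cell below is an added convenience that reuses the beat_idxs counts computed above.
###Code
# Fraction of segment starts aligned with the downbeat (beat number 1), using `beat_idxs` from above.
total_segs = sum(beat_idxs.values())
print("Segments starting on beat 1: %.1f%% (%d of %d)" % (100.0 * beat_idxs['1'] / total_segs, beat_idxs['1'], total_segs))
###Output
_____no_output_____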
###Markdown
Segmentation Data
###Code
import seaborn as sns
sns.set_style("darkgrid")
SEGMENTATION_RESULTS_KWSKI = "../results/korz_beats.csv"
SEGMENTATION_RESULTS_ANNB = "../results/annot_beats.csv"
SEGMENTATION_RESULTS_LIBROSA = "../results/librosa_beats.csv"
kwski_df = pd.read_csv(SEGMENTATION_RESULTS_KWSKI)
librosa_df = pd.read_csv(SEGMENTATION_RESULTS_LIBROSA)
ann_df = pd.read_csv(SEGMENTATION_RESULTS_ANNB)
fig, ax = plt.subplots()
c1 = "turquoise"
box1 = ax.boxplot((librosa_df["HitRate_0.5F"],
librosa_df["HitRate_3F"],
librosa_df["PWF"],
librosa_df["Sf"]),
positions=[1, 5, 9, 13],
notch=True, patch_artist=True,
boxprops=dict(facecolor=c1, color="purple"),
capprops=dict(color=c1),
whiskerprops=dict(color=c1),
flierprops=dict(color=c1, markeredgecolor=c1),
medianprops=dict(color=c1))
c2 = "orchid"
box2 = ax.boxplot((kwski_df["HitRate_0.5F"],
kwski_df["HitRate_3F"],
kwski_df["PWF"],
kwski_df["Sf"]),
positions=[2, 6, 10, 14],
notch=True, patch_artist=True,
boxprops=dict(facecolor=c2, color="purple"),
capprops=dict(color=c2),
whiskerprops=dict(color=c2),
flierprops=dict(color=c2, markeredgecolor=c2),
medianprops=dict(color=c2))
c3 = "purple"
box3 = ax.boxplot((ann_df["HitRate_0.5F"],
ann_df["HitRate_3F"],
ann_df["PWF"],
ann_df["Sf"]),
positions=[3, 7, 11, 15],
notch=True, patch_artist=True,
boxprops=dict(facecolor=c3, color=c3),
capprops=dict(color=c3),
whiskerprops=dict(color=c3),
flierprops=dict(color=c3, markeredgecolor=c3),
medianprops=dict(color=c3))
for item in ['boxes', 'whiskers', 'fliers', 'medians', 'caps']:
plt.setp(box3[item], color=c3)
plt.setp(box3["boxes"], facecolor=c3)
plt.setp(box3["fliers"], markeredgecolor=c3)
ax.legend([box1["boxes"][0], box2["boxes"][0], box3["boxes"][0]],
['Ellis Beats', 'Korzeniowski Beats', 'Annotated Beats'],
loc='lower right')
plt.xticks([2, 6, 10, 14], ["Hit Rate@0.5", "Hit Rate@3", "Pairwise Clust.", "Entropy Scores"])
plt.xlim(0, 16)
plt.ylim(-0.05, 1)
plt.ylabel("F-measures")
plt.tight_layout()
plt.savefig("../paper/figs/segment_results.pdf")
###Output
_____no_output_____ |
Libraries/oneDNN/tutorials/tutorial_verbose_jitdump.ipynb | ###Markdown
Profile Intel® oneAPI Deep Neural Network Library (oneDNN) Samples by using Verbose Mode and JIT DUMP inspection Learning ObjectivesIn this module the developer will:* Learn how to use Verbose Mode to profile oneDNN samples on CPU & GPU* Learn how to inspect JIT Dump to profile oneDNN samples on CPU This module shows the elapsed time percentage over different oneDNN primitives This module also shows the elapsed time percentage over different oneDNN JIT or GPU kernels *** Verbose Mode Exercise prerequisites*** Step 1: Prepare the build/run environmentoneDNN has four different configurations inside the Intel oneAPI toolkits. Each configuration is in a different folder under the oneDNN installation path, and each configuration supports a different compiler or threading library. Set the installation path of your oneAPI toolkit
###Code
# default path: /opt/intel/oneapi
%env ONEAPI_INSTALL=/opt/intel/oneapi
!printf '%s\n' $ONEAPI_INSTALL/dnnl/latest/cpu_*
###Output
_____no_output_____
###Markdown
As you can see, there are four different folders under the oneDNN installation path, and each of those configurations supports different features. This tutorial will use the dpcpp configuration to showcase the verbose log for both CPU and GPU. Create a lab folder for this exercise.
###Code
!mkdir -p lab
###Output
_____no_output_____
###Markdown
Install required python packages.
###Code
!pip3 install -r requirements.txt
###Output
_____no_output_____
###Markdown
Get current platform information for this exercise.
###Code
from profiling.profile_utils import PlatformUtils
plat_utils = PlatformUtils()
plat_utils.dump_platform_info()
###Output
_____no_output_____
###Markdown
Step 2: Preparing the samples codeThis exercise uses the cnn_inference_f32.cpp example from oneDNN installation path.The section below will copy the cnn_inference_f32.cpp file into the lab folder. This section also copies the required header files and CMake file into the lab folder.
###Code
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/cnn_inference_f32.cpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.hpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.h lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/CMakeLists.txt lab/
###Output
_____no_output_____
###Markdown
Step 3: Build and Run with the oneAPI DPC++ Compiler One of the oneDNN configurations supports the oneAPI DPC++ compiler, and it can run on different architectures by using DPC++.The following section shows you how to build with DPC++ and run on different architectures. Script - build.shThe script **build.sh** encapsulates the compiler **dpcpp** command and flags that will generate the executable.To enable use of the DPC++ compiler and the related SYCL runtime, some definitions must be passed as cmake arguments.Here are the related cmake arguments for the DPC++ configuration: -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
###Code
%%writefile build.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force> /dev/null 2>&1
export EXAMPLE_ROOT=./lab/
mkdir dpcpp
cd dpcpp
cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
make cnn-inference-f32-cpp
###Output
_____no_output_____
###Markdown
Once you achieve an all-clear from your compilation, you execute your program on the DevCloud or a local machine. Script - run.shThe script **run.sh** encapsulates the program for submission to the job queue for execution.By default, the built program uses CPU as the execution engine, but the user can switch to GPU by specifying the input argument "gpu".The user can refer to run.sh below to run cnn-inference-f32-cpp on both CPU and GPU.
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=0
./dpcpp/out/cnn-inference-f32-cpp cpu
./dpcpp/out/cnn-inference-f32-cpp gpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
Submitting **build.sh** and **run.sh** to the job queueNow we can submit **build.sh** and **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! rm -rf dpcpp;chmod 755 q; chmod 755 build.sh; chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q build.sh; ./q run.sh; else ./build.sh; ./run.sh; fi
###Output
_____no_output_____
###Markdown
Enable Verbose Mode***In this section, we enable verbose mode on the built sample from the previous section, and users can see different results from CPU and GPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html) for detailed verbose mode information When the feature is enabled at build-time, you can use the DNNL_VERBOSE environment variable to turn verbose mode on and control the level of verbosity.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_VERBOSE| 0 |no verbose output (default)|||1|primitive information at execution|||2|primitive information at creation and execution| prepare run.sh and enable DNNL_VERBOSE as 2
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=2
./dpcpp/out/cnn-inference-f32-cpp cpu >>log_cpu_f32_vb2.csv 2>&1
./dpcpp/out/cnn-inference-f32-cpp gpu >>log_gpu_f32_vb2.csv 2>&1
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Analyze Verbose Logs*** Step 1: List out all oneDNN verbose logsusers should see two verbose logs listed in the table below.|Log File Name | Description ||:-----|:----||log_cpu_f32_vb2.csv| log for cpu run ||log_gpu_f32_vb2.csv| log for gpu run|
###Code
import os
filenames= os.listdir (".")
result = []
keyword = ".csv"
for filename in filenames:
#if os.path.isdir(os.path.join(os.path.abspath("."), filename)):
if filename.find(keyword) != -1:
result.append(filename)
result.sort()
index =0
for folder in result:
print(" %d : %s " %(index, folder))
index+=1
###Output
_____no_output_____
###Markdown
Step 2: Pick a verbose log by putting its index value belowUsers can pick either cpu or gpu log for analysis. Once users finish Step 2 to Step 8 for one log file, they can go back to step 2 and select another log file for analysis.
###Code
FdIndex=0
###Output
_____no_output_____
###Markdown
OPTIONAL: browse the content of selected verbose log.
###Code
logfile = result[FdIndex]
with open(logfile) as f:
log = f.read()
print(log)
###Output
_____no_output_____
###Markdown
Step 3: Parse verbose log and get the data back
###Code
logfile = result[FdIndex]
print(logfile)
from profiling.profile_utils import oneDNNUtils, oneDNNLog
onednn = oneDNNUtils()
log1 = oneDNNLog()
log1.load_log(logfile)
data = log1.data
exec_data = log1.exec_data
###Output
_____no_output_____
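###Markdown
For reference, the cell below sketches how such a verbose log could be aggregated by hand. It is an illustrative addition, not part of profile_utils: it assumes the default comma-separated dnnl_verbose record layout, in which the second field distinguishes exec from create, the fourth field names the primitive, and the last field holds the execution time in milliseconds; the exact layout may differ between oneDNN versions.
###Code
# Hedged sketch: sum execution time per primitive directly from the raw verbose log.
# Assumptions: records start with "dnnl_verbose", fields[1] == "exec" marks execution records,
# fields[3] is the primitive name, and the last field is the elapsed time in milliseconds.
from collections import defaultdict
manual_times = defaultdict(float)
with open(logfile) as f:
    for line in f:
        fields = line.strip().split(',')
        if len(fields) < 5 or fields[0] != 'dnnl_verbose' or fields[1] != 'exec':
            continue
        try:
            elapsed_ms = float(fields[-1])
        except ValueError:
            continue  # skip marker/header lines that carry no timing
        manual_times[fields[3]] += elapsed_ms
print(dict(manual_times))
###Output
_____no_output_____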
###Markdown
Step 4: Time breakdown for exec typeThe exec type includes exec and create. |exec type | Description | |:-----|:----| |exec | Time for primitive execution. Ideally, most of the time is spent on primitive execution. | |create| Time for primitive creation. Primitive creation happens only once, so it should take comparatively little time. |
###Code
onednn.breakdown(data,"exec","time")
###Output
_____no_output_____
###Markdown
Step 5: Time breakdown for primitives typeThe primitives type includes convolution, reorder, sum, etc. For this simple convolution net example, convolution and inner product primitives are expected to take most of the time. However, the exact time percentage of different primitives may vary among different architectures. Users can easily identify the top hotspots of primitive executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"type","time")
###Output
_____no_output_____
###Markdown
Step 6: Time breakdown for JIT kernel type oneDNN uses just-in-time compilation (JIT) to generate optimal code for some functions based on input parameters and the instruction set supported by the system. Therefore, users can see different JIT kernel types among different CPU and GPU architectures. For example, users can see the avx_core_vnni JIT kernel if the workload uses VNNI instructions on a Cascade Lake platform. Users can also see different OCL kernels among different Intel GPU generations. Moreover, users can identify the top hotspots of JIT kernel executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"jit","time")
###Output
_____no_output_____
###Markdown
Step 7: Time breakdown for algorithm typeoneDNN also supports different algorithms. Users can identify the top hotspots of algorithm executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"alg","time")
###Output
_____no_output_____
###Markdown
Step 8: Time breakdown for architecture typeThe supported architectures include CPU and GPU. For this simple net sample, we don't split computation among CPU and GPU, so users should see either 100% CPU time or 100% GPU time.
###Code
onednn.breakdown(data,"arch","time")
###Output
_____no_output_____
###Markdown
*** Inspecting JIT CodeIn this section, we dump JIT code on the built sample from the previous section, and users can see different results from CPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_inspecting_jit.html) for detailed JIT Dump information When the feature is enabled at build-time, you can use the DNNL_JIT_DUMP environment variable to inspect JIT code.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_JIT_DUMP | 0 |JIT dump is disabled (default)|||any other value|JIT dump is enabled| Step 1: Prepare run.sh and enable DNNL_JIT_DUMP as 1
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# disable verbose log
export DNNL_VERBOSE=0
# enable JIT Dump
export DNNL_JIT_DUMP=1
./dpcpp/out/cnn-inference-f32-cpp cpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
Step 2: Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Step 3: Move all JIT Dump files into the jitdump folder
###Code
!mkdir jitdump;mv *.bin jitdump
###Output
_____no_output_____
###Markdown
Step 4: List out all oneDNN JIT Dump files
###Code
import os
filenames= os.listdir ("jitdump")
result = []
keyword = ".bin"
for filename in filenames:
#if os.path.isdir(os.path.join(os.path.abspath("."), filename)):
if filename.find(keyword) != -1:
result.append(filename)
result.sort()
index =0
for folder in result:
print(" %d : %s " %(index, folder))
index+=1
###Output
_____no_output_____
###Markdown
Step 5: Pick a JIT Dump file by putting its index value below
###Code
FdIndex=2
###Output
_____no_output_____
###Markdown
Step 6: export JIT Dump file to environment variable JITFILE
###Code
logfile = result[FdIndex]
os.environ["JITFILE"] = logfile
###Output
_____no_output_____
###Markdown
Step 7: disassemble the JIT Dump file to view the code> NOTE: If the oneDNN sample uses VNNI instructions, users should be able to see the "vpdpbusd" instruction in the JIT Dump file > NOTE: If the oneDNN sample uses BF16 instructions, users should see usage of vdpbf16ps or vcvtne2ps2bf16 in the JIT dump file. > NOTE: To disassemble the vdpbf16ps and vcvtne2ps2bf16 instructions, users must use objdump v2.34 or above.
###Code
!objdump -D -b binary -mi386:x86-64 jitdump/$JITFILE
###Output
_____no_output_____
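###Markdown
Because the raw disassembly can be very long, it may be easier to search it for the specific instructions mentioned above. The cell below is a small convenience addition that pipes the same objdump output through grep to count occurrences of the VNNI (vpdpbusd) and BF16 (vdpbf16ps, vcvtne2ps2bf16) instructions.
###Code
# Count VNNI / BF16 instructions in the selected JIT kernel dump (prints 0 if none are present).
!objdump -D -b binary -mi386:x86-64 jitdump/$JITFILE | grep -c -E "vpdpbusd|vdpbf16ps|vcvtne2ps2bf16"
###Output
_____no_output_____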
###Markdown
Profile Intel® oneAPI Deep Neural Network Library (oneDNN) Samples by using Verbose Mode and JIT DUMP inspection Learning ObjectivesIn this module the developer will:* Learn how to use Verbose Mode to profile oneDNN samples on CPU & GPU* Learn how to inspect JIT Dump to profile oneDNN samples on CPU This module shows the elapsed time percentage over different oneDNN primitives This module also shows the elapsed time percentage over different oneDNN JIT or GPU kernels *** Verbose Mode Exercise prerequisites*** Step 1: Prepare the build/run environmentoneDNN has four different configurations inside the Intel oneAPI toolkits. Each configuration is in a different folder under the oneDNN installation path, and each configuration supports a different compiler or threading library. Set the installation path of your oneAPI toolkit
###Code
# ignore all warning messages
import warnings
warnings.filterwarnings('ignore')
%autosave 0
# default path: /opt/intel/oneapi
%env ONEAPI_INSTALL=/opt/intel/oneapi
import os
if os.path.isdir(os.environ['ONEAPI_INSTALL']) == False:
print("ERROR! wrong oneAPI installation path")
!printf '%s\n' $ONEAPI_INSTALL/dnnl/latest/cpu_*
###Output
_____no_output_____
###Markdown
As you can see, there are four different folders under the oneDNN installation path, and each of those configurations supports different features. This tutorial will use the dpcpp configuration to showcase the verbose log for both CPU and GPU. Create a lab folder for this exercise.
###Code
!rm -rf lab;mkdir -p lab
###Output
_____no_output_____
###Markdown
Install required python packages.
###Code
!pip3 install -r requirements.txt
###Output
_____no_output_____
###Markdown
Get current platform information for this exercise.
###Code
from profiling.profile_utils import PlatformUtils
plat_utils = PlatformUtils()
plat_utils.dump_platform_info()
###Output
_____no_output_____
###Markdown
Step 2: Preparing the samples codeThis exercise uses the cnn_inference_f32.cpp example from oneDNN installation path.The section below will copy the cnn_inference_f32.cpp file into the lab folder. This section also copies the required header files and CMake file into the lab folder.
###Code
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/cnn_inference_f32.cpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.hpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.h lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/dpcpp_driver_check.cmake lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/CMakeLists.txt lab/
###Output
_____no_output_____
###Markdown
Step 3: Build and Run with the oneAPI DPC++ Compiler One of the oneDNN configurations supports the oneAPI DPC++ compiler, and it can run on different architectures by using DPC++.The following section shows you how to build with DPC++ and run on different architectures. Script - build.shThe script **build.sh** encapsulates the compiler **dpcpp** command and flags that will generate the executable.To enable use of the DPC++ compiler and the related SYCL runtime, some definitions must be passed as cmake arguments.Here are the related cmake arguments for the DPC++ configuration: -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
###Code
%%writefile build.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force> /dev/null 2>&1
export EXAMPLE_ROOT=./lab/
mkdir dpcpp
cd dpcpp
cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
make cnn-inference-f32-cpp
###Output
_____no_output_____
###Markdown
Once you achieve an all-clear from your compilation, you execute your program on the DevCloud or a local machine. Script - run.shThe script **run.sh** encapsulates the program for submission to the job queue for execution.By default, the built program uses CPU as the execution engine, but the user can switch to GPU by specifying the input argument "gpu".The user can refer to run.sh below to run cnn-inference-f32-cpp on both CPU and GPU.
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=0
./dpcpp/out/cnn-inference-f32-cpp cpu
./dpcpp/out/cnn-inference-f32-cpp gpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL : replace $ONEAPI_INSTALL with set value in both build.sh and run.sh> NOTE : this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('build.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Submitting **build.sh** and **run.sh** to the job queueNow we can submit **build.sh** and **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! rm -rf dpcpp;chmod 755 q; chmod 755 build.sh; chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q build.sh; ./q run.sh; else ./build.sh; ./run.sh; fi
###Output
_____no_output_____
###Markdown
Enable Verbose Mode***In this section, we enable verbose mode on the built sample from the previous section, and users can see different results from CPU and GPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html) for detailed verbose mode information When the feature is enabled at build-time, you can use the DNNL_VERBOSE environment variable to turn verbose mode on and control the level of verbosity.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_VERBOSE| 0 |no verbose output (default)|||1|primitive information at execution|||2|primitive information at creation and execution| Users could also enable DNNL_VERBOSE with timestamps by DNNL_VERBOSE_TIMESTAMP environment variable. |Environment variable|Value|Description||:-----|:----|:-----||DNNL_VERBOSE_TIMESTAMP| 0 |display timestamps disabled (default)|||1| display timestamps enabled| prepare run.sh and enable DNNL_VERBOSE as 2 and DNNL_VERBOSE_TIMESTAMP as 1
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=2
export DNNL_VERBOSE_TIMESTAMP=1
./dpcpp/out/cnn-inference-f32-cpp cpu >>log_cpu_f32_vb2.csv 2>&1
./dpcpp/out/cnn-inference-f32-cpp gpu >>log_gpu_f32_vb2.csv 2>&1
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL : replace $ONEAPI_INSTALL with set value in run.sh> NOTE : this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Analyze Verbose Logs*** Step 1: List out all oneDNN verbose logsusers should see two verbose logs listed in the table below.|Log File Name | Description ||:-----|:----||log_cpu_f32_vb2.csv| log for cpu run ||log_gpu_f32_vb2.csv| log for gpu run|
###Code
import os
filenames= os.listdir (".")
result = []
keyword = ".csv"
for filename in filenames:
#if os.path.isdir(os.path.join(os.path.abspath("."), filename)):
if filename.find(keyword) != -1:
result.append(filename)
result.sort()
index =0
for folder in result:
print(" %d : %s " %(index, folder))
index+=1
###Output
_____no_output_____
###Markdown
Step 2: Pick a verbose log by putting its index value belowUsers can pick either cpu or gpu log for analysis. Once users finish Step 2 to Step 8 for one log file, they can go back to step 2 and select another log file for analysis.
###Code
FdIndex=0
###Output
_____no_output_____
###Markdown
OPTIONAL: browse the content of selected verbose log.
###Code
logfile = result[FdIndex]
with open(logfile) as f:
log = f.read()
print(log)
###Output
_____no_output_____
###Markdown
Step 3: Parse verbose log and get the data back> Users will also get a oneDNN.json file with timeline information for oneDNN primitives.
###Code
logfile = result[FdIndex]
print(logfile)
from profiling.profile_utils import oneDNNUtils, oneDNNLog
onednn = oneDNNUtils()
log1 = oneDNNLog()
log1.load_log(logfile)
data = log1.data
exec_data = log1.exec_data
###Output
_____no_output_____
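###Markdown
Before loading the file into chrome://tracing (see Step 9 below), a quick sanity check from Python can confirm that the oneDNN.json file was written and show how many events it contains. This is a hedged sketch: it assumes the file was written to the current directory as oneDNN.json and follows the Chrome trace event format (either a plain list of events or an object holding a "traceEvents" list, each event carrying a "name" field); adjust the path and field names if the generated schema differs.
###Code
# Hedged sketch: summarize the generated oneDNN.json timeline (Chrome trace event schema assumed).
import json
from collections import Counter
with open('oneDNN.json') as f:
    trace = json.load(f)
events = trace.get('traceEvents', []) if isinstance(trace, dict) else trace
print("Total trace events:", len(events))
print(Counter(e.get('name', 'unknown') for e in events).most_common(5))
###Output
_____no_output_____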
###Markdown
Step 4: Time breakdown for exec typeThe exec type includes exec and create. |exec type | Description | |:-----|:----| |exec | Time for primitive execution. Ideally, most of the time is spent on primitive execution. | |create| Time for primitive creation. Primitive creation happens only once, so it should take comparatively little time. |
###Code
onednn.breakdown(data,"exec","time")
###Output
_____no_output_____
###Markdown
Step 5: Time breakdown for primitives typeThe primitives type includes convolution, reorder, sum, etc. For this simple convolution net example, convolution and inner product primitives are expected to take most of the time. However, the exact time percentage of different primitives may vary among different architectures. Users can easily identify the top hotspots of primitive executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"type","time")
###Output
_____no_output_____
###Markdown
Step 6: Time breakdown for JIT kernel type oneDNN uses just-in-time compilation (JIT) to generate optimal code for some functions based on input parameters and the instruction set supported by the system. Therefore, users can see different JIT kernel types among different CPU and GPU architectures. For example, users can see the avx_core_vnni JIT kernel if the workload uses VNNI instructions on a Cascade Lake platform. Users can also see different OCL kernels among different Intel GPU generations. Moreover, users can identify the top hotspots of JIT kernel executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"jit","time")
###Output
_____no_output_____
###Markdown
Step 7: Time breakdown for algorithm typeoneDNN also supports different algorithms. Users can identify the top hotspots of algorithm executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"alg","time")
###Output
_____no_output_____
###Markdown
Step 8: Time breakdown for architecture typeThe supported architectures include CPU and GPU. For this simple net sample, we don't split computation among CPU and GPU, so users should see either 100% CPU time or 100% GPU time.
###Code
onednn.breakdown(data,"arch","time")
###Output
_____no_output_____
###Markdown
Step 9: Timeline tracingWith the generated oneDNN.json file, users could see the oneDNN primitives over a timeline via the chrome://tracing UI.Please load the oneDNN.json file on the chrome://tracing UI, and users could see the timeline information such as the below diagram.> Note : To use the chrome://tracing UI, users only need to type "chrome://tracing" in a browser with chrome tracing support. (Google Chrome and Microsoft Edge) *** Inspecting JIT CodeIn this section, we dump JIT code on the built sample from the previous section, and users can see different results from CPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_inspecting_jit.html) for detailed JIT Dump information When the feature is enabled at build-time, you can use the DNNL_JIT_DUMP environment variable to inspect JIT code.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_JIT_DUMP | 0 |JIT dump is disabled (default)|||any other value|JIT dump is enabled| Step 1: Prepare run.sh and enable DNNL_JIT_DUMP as 1
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# disable verbose log
export DNNL_VERBOSE=0
# enable JIT Dump
export DNNL_JIT_DUMP=1
./dpcpp/out/cnn-inference-f32-cpp cpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL : replace $ONEAPI_INSTALL with set value in run.sh> NOTE : this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Step 2: Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Step 3: Move all JIT Dump files into the jitdump folder
###Code
!mkdir jitdump;mv *.bin jitdump
###Output
_____no_output_____
###Markdown
Step 4: List out all oneDNN JIT Dump files
###Code
import os
filenames= os.listdir ("jitdump")
result = []
keyword = ".bin"
for filename in filenames:
#if os.path.isdir(os.path.join(os.path.abspath("."), filename)):
if filename.find(keyword) != -1:
result.append(filename)
result.sort()
index =0
for folder in result:
print(" %d : %s " %(index, folder))
index+=1
###Output
_____no_output_____
###Markdown
Step 5: Pick a JIT Dump file by putting its index value below
###Code
FdIndex=0
###Output
_____no_output_____
###Markdown
Step 6: export JIT Dump file to environment variable JITFILE
###Code
if len(result):
logfile = result[FdIndex]
os.environ["JITFILE"] = logfile
###Output
_____no_output_____
###Markdown
Step 7: disassemble the JIT Dump file to view the code> NOTE: If the oneDNN sample uses VNNI instructions, users should be able to see the "vpdpbusd" instruction in the JIT Dump file > NOTE: If the oneDNN sample uses BF16 instructions, users should see usage of vdpbf16ps or vcvtne2ps2bf16 in the JIT dump file. > NOTE: To disassemble the vdpbf16ps and vcvtne2ps2bf16 instructions, users must use objdump v2.34 or above.
###Code
!objdump -D -b binary -mi386:x86-64 jitdump/$JITFILE
###Output
_____no_output_____
###Markdown
Profile Intel® oneAPI Deep Neural Network Library (oneDNN) Samples by using Verbose Mode and JIT DUMP inspection Learning ObjectivesIn this module the developer will:* Learn how to use Verbose Mode to profile oneDNN samples on CPU & GPU* Learn how to inspect JIT Dump to profile oneDNN samples on CPU This module shows the elapsed time percentage over different oneDNN primitives This module also shows the elapsed time percentage over different oneDNN JIT or GPU kernels *** Verbose Mode Exercise prerequisites*** Step 1: Prepare the build/run environmentoneDNN has four different configurations inside the Intel oneAPI toolkits. Each configuration is in a different folder under the oneDNN installation path, and each configuration supports a different compiler or threading library. Set the installation path of your oneAPI toolkit
###Code
# default path: /opt/intel/oneapi
%env ONEAPI_INSTALL=/opt/intel/oneapi
import os
if os.path.isdir(os.environ['ONEAPI_INSTALL']) == False:
print("ERROR! wrong oneAPI installation path")
!printf '%s\n' $ONEAPI_INSTALL/dnnl/latest/cpu_*
###Output
_____no_output_____
###Markdown
As you can see, there are four different folders under the oneDNN installation path, and each of those configurations supports different features. This tutorial will use the dpcpp configuration to showcase the verbose log for both CPU and GPU. Create a lab folder for this exercise.
###Code
!rm -rf lab;mkdir -p lab
###Output
_____no_output_____
###Markdown
Install required python packages.
###Code
!pip3 install -r requirements.txt
###Output
_____no_output_____
###Markdown
Get current platform information for this exercise.
###Code
from profiling.profile_utils import PlatformUtils
plat_utils = PlatformUtils()
plat_utils.dump_platform_info()
###Output
_____no_output_____
###Markdown
Step 2: Preparing the samples codeThis exercise uses the cnn_inference_f32.cpp example from oneDNN installation path.The section below will copy the cnn_inference_f32.cpp file into the lab folder. This section also copies the required header files and CMake file into the lab folder.
###Code
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/cnn_inference_f32.cpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.hpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.h lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/CMakeLists.txt lab/
###Output
_____no_output_____
###Markdown
Step 3: Build and Run with the oneAPI DPC++ Compiler One of the oneDNN configurations supports the oneAPI DPC++ compiler, and it can run on different architectures by using DPC++.The following section shows you how to build with DPC++ and run on different architectures. Script - build.shThe script **build.sh** encapsulates the compiler **dpcpp** command and flags that will generate the executable.To enable use of the DPC++ compiler and the related SYCL runtime, some definitions must be passed as cmake arguments.Here are the related cmake arguments for the DPC++ configuration: -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
###Code
%%writefile build.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force> /dev/null 2>&1
export EXAMPLE_ROOT=./lab/
mkdir dpcpp
cd dpcpp
cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
make cnn-inference-f32-cpp
###Output
_____no_output_____
###Markdown
Once you achieve an all-clear from your compilation, you execute your program on the DevCloud or a local machine. Script - run.shThe script **run.sh** encapsulates the program for submission to the job queue for execution.By default, the built program uses CPU as the execution engine, but the user can switch to GPU by specifying the input argument "gpu".The user can refer to run.sh below to run cnn-inference-f32-cpp on both CPU and GPU.
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=0
./dpcpp/out/cnn-inference-f32-cpp cpu
./dpcpp/out/cnn-inference-f32-cpp gpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL : replace $ONEAPI_INSTALL with set value in both build.sh and run.sh> NOTE : this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('build.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Submitting **build.sh** and **run.sh** to the job queueNow we can submit **build.sh** and **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! rm -rf dpcpp;chmod 755 q; chmod 755 build.sh; chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q build.sh; ./q run.sh; else ./build.sh; ./run.sh; fi
###Output
_____no_output_____
###Markdown
Enable Verbose Mode***In this section, we enable verbose mode on the built sample from the previous section, and users can see different results from CPU and GPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html) for detailed verbose mode information When the feature is enabled at build-time, you can use the DNNL_VERBOSE environment variable to turn verbose mode on and control the level of verbosity.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_VERBOSE| 0 |no verbose output (default)|||1|primitive information at execution|||2|primitive information at creation and execution| prepare run.sh and enable DNNL_VERBOSE as 2
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=2
./dpcpp/out/cnn-inference-f32-cpp cpu >>log_cpu_f32_vb2.csv 2>&1
./dpcpp/out/cnn-inference-f32-cpp gpu >>log_gpu_f32_vb2.csv 2>&1
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL : replace $ONEAPI_INSTALL with set value in run.sh> NOTE : this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Analyze Verbose Logs*** Step 1: List out all oneDNN verbose logsusers should see two verbose logs listed in the table below.|Log File Name | Description ||:-----|:----||log_cpu_f32_vb2.csv| log for cpu run ||log_gpu_f32_vb2.csv| log for gpu run|
###Code
import os
filenames= os.listdir (".")
result = []
keyword = ".csv"
for filename in filenames:
#if os.path.isdir(os.path.join(os.path.abspath("."), filename)):
if filename.find(keyword) != -1:
result.append(filename)
result.sort()
index =0
for folder in result:
print(" %d : %s " %(index, folder))
index+=1
###Output
_____no_output_____
###Markdown
Step 2: Pick a verbose log by putting its index value belowUsers can pick either cpu or gpu log for analysis. Once users finish Step 2 to Step 8 for one log file, they can go back to step 2 and select another log file for analysis.
###Code
FdIndex=0
###Output
_____no_output_____
###Markdown
OPTIONAL: browse the content of selected verbose log.
###Code
logfile = result[FdIndex]
with open(logfile) as f:
log = f.read()
print(log)
###Output
_____no_output_____
###Markdown
Step 3: Parse verbose log and get the data back
###Code
logfile = result[FdIndex]
print(logfile)
from profiling.profile_utils import oneDNNUtils, oneDNNLog
onednn = oneDNNUtils()
log1 = oneDNNLog()
log1.load_log(logfile)
data = log1.data
exec_data = log1.exec_data
###Output
_____no_output_____
###Markdown
Step 4: Time breakdown for exec typeThe exec type includes exec and create. |exec type | Description | |:-----|:----| |exec | Time for primitives exection. Better to spend most of time on primitives execution. | |create| Time for primitives creation. Primitives creation happens once. Better to spend less time on primitive creation. |
###Code
onednn.breakdown(data,"exec","time")
###Output
_____no_output_____
###Markdown
Step 5: Time breakdown for primitives typeThe primitives type includes convolution, reorder, sum, etc. For this simple convolution net example, convolution and inner product primitives are expected to spend most of time. However, the exact time percentage of different primitivies may vary among different architectures. Users can easily identify top hotpots of primitives executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"type","time")
###Output
_____no_output_____
###Markdown
Step 6: Time breakdown for JIT kernel type oneDNN uses just-in-time compilation (JIT) to generate optimal code for some functions based on input parameters and instruction set supported by the system. Therefore, users can see different JIT kernel type among different CPU and GPU architectures. For example, users can see avx_core_vnni JIT kernel if the workload uses VNNI instruction on Cascake Lake platform. Users can also see different OCL kernels among different Intel GPU generations. Moreover, users can identify the top hotspots of JIT kernel executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"jit","time")
###Output
_____no_output_____
###Markdown
Step 7: Time breakdown for algorithm typeoneDNN also supports different algorithms. Users can identify the top hotspots of algorthm executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"alg","time")
###Output
_____no_output_____
###Markdown
Step 8: Time breakdown for architecture typeThe supported architectures include CPU and GPU. For this simple net sample, we don't split computation among CPU and GPU, so users should see either 100% CPU time or 100% GPU time.
###Code
onednn.breakdown(data,"arch","time")
###Output
_____no_output_____
###Markdown
*** Inspecting JIT CodeIn this section, we dump JIT code on the built sample from the previous section, and users can see different results from CPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_inspecting_jit.html) for detailed JIT Dump information When the feature is enabled at build-time, you can use the DNNL_JIT_DUMP environment variable to inspect JIT code.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_JIT_DUMP | 0 |JIT dump is disabled (default)|||any other value|JIT dump is enabled| Step 1: Prepare run.sh and enable DNNL_JIT_DUMP as 1
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# disable verbose log
export DNNL_VERBOSE=0
# enable JIT Dump
export DNNL_JIT_DUMP=1
./dpcpp/out/cnn-inference-f32-cpp cpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL : replace $ONEAPI_INSTALL with set value in run.sh> NOTE : this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Step 2: Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Step 3: Move all JIT Dump files into the jitdump folder
###Code
!mkdir jitdump;mv *.bin jitdump
###Output
_____no_output_____
###Markdown
Step 4: List out all oneDNN JIT Dump files
###Code
import os
filenames= os.listdir ("jitdump")
result = []
keyword = ".bin"
for filename in filenames:
#if os.path.isdir(os.path.join(os.path.abspath("."), filename)):
if filename.find(keyword) != -1:
result.append(filename)
result.sort()
index =0
for folder in result:
print(" %d : %s " %(index, folder))
index+=1
###Output
_____no_output_____
###Markdown
Step 5: Pick a JIT Dump file by putting its index value below
###Code
FdIndex=2
###Output
_____no_output_____
###Markdown
Step 6: export JIT Dump file to environment variable JITFILE
###Code
logfile = result[FdIndex]
os.environ["JITFILE"] = logfile
###Output
_____no_output_____
###Markdown
Step 7: disassemble the JIT Dump file to view the code> NOTE: If the oneDNN sample uses VNNI instructions, users should be able to see the "vpdpbusd" instruction in the JIT Dump file > NOTE: If the oneDNN sample uses BF16 instructions, users should see usage of vdpbf16ps or vcvtne2ps2bf16 in the JIT dump file. > NOTE: To disassemble the vdpbf16ps and vcvtne2ps2bf16 instructions, users must use objdump v2.34 or above.
###Code
!objdump -D -b binary -mi386:x86-64 jitdump/$JITFILE
###Output
_____no_output_____
###Markdown
Profile Intel® oneAPI Deep Neural Network Library (oneDNN) Samples by using Verbose Mode and JIT DUMP inspection Learning ObjectivesIn this module the developer will:* Learn how to use Verbose Mode to profile oneDNN samples on CPU & GPU* Learn how to inspect JIT Dump to profile oneDNN samples on CPU This module shows the elapsed time percentage over different oneDNN primitives This module also shows the elapsed time percentage over different oneDNN JIT or GPU kernels *** Verbose Mode Exercise prerequisites*** Step 1: Prepare the build/run environmentoneDNN has four different configurations inside the Intel oneAPI toolkits. Each configuration is in a different folder under the oneDNN installation path, and each configuration supports a different compiler or threading library. Set the installation path of your oneAPI toolkit
###Code
# default path: /opt/intel/oneapi
%env ONEAPI_INSTALL=/opt/intel/oneapi
import os
if os.path.isdir(os.environ['ONEAPI_INSTALL']) == False:
print("ERROR! wrong oneAPI installation path")
!printf '%s\n' $ONEAPI_INSTALL/dnnl/latest/cpu_*
###Output
_____no_output_____
###Markdown
As you can see, there are four different folders under the oneDNN installation path, and each of those configurations supports different features. This tutorial will use the dpcpp configuration to showcase the verbose log for both CPU and GPU. Create a lab folder for this exercise.
###Code
!mkdir -p lab
###Output
_____no_output_____
###Markdown
Install required python packages.
###Code
!pip3 install -r requirements.txt
###Output
_____no_output_____
###Markdown
Get current platform information for this exercise.
###Code
from profiling.profile_utils import PlatformUtils
plat_utils = PlatformUtils()
plat_utils.dump_platform_info()
###Output
_____no_output_____
###Markdown
Step 2: Preparing the samples codeThis exercise uses the cnn_inference_f32.cpp example from oneDNN installation path.The section below will copy the cnn_inference_f32.cpp file into the lab folder. This section also copies the required header files and CMake file into the lab folder.
###Code
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/cnn_inference_f32.cpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.hpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.h lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/CMakeLists.txt lab/
###Output
_____no_output_____
###Markdown
Step 3: Build and Run with the oneAPI DPC++ Compiler One of the oneDNN configurations supports the oneAPI DPC++ compiler, and it can run on different architectures by using DPC++.The following section shows you how to build with DPC++ and run on different architectures. Script - build.shThe script **build.sh** encapsulates the compiler **dpcpp** command and flags that will generate the executable.To enable use of the DPC++ compiler and the related SYCL runtime, some definitions must be passed as cmake arguments.Here are the related cmake arguments for the DPC++ configuration: -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
###Code
%%writefile build.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force> /dev/null 2>&1
export EXAMPLE_ROOT=./lab/
mkdir dpcpp
cd dpcpp
cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
make cnn-inference-f32-cpp
###Output
_____no_output_____
###Markdown
Once you achieve an all-clear from your compilation, you execute your program on the DevCloud or a local machine. Script - run.shThe script **run.sh** encapsulates the program for submission to the job queue for execution.By default, the built program uses CPU as the execution engine, but the user can switch to GPU by specifying the input argument "gpu".The user can refer to run.sh below to run cnn-inference-f32-cpp on both CPU and GPU.
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=0
./dpcpp/out/cnn-inference-f32-cpp cpu
./dpcpp/out/cnn-inference-f32-cpp gpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
Submitting **build.sh** and **run.sh** to the job queueNow we can submit **build.sh** and **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! rm -rf dpcpp;chmod 755 q; chmod 755 build.sh; chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q build.sh; ./q run.sh; else ./build.sh; ./run.sh; fi
###Output
_____no_output_____
###Markdown
Enable Verbose Mode***In this section, we enable verbose mode on the built sample from the previous section, and users can see different results from CPU and GPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html) for detailed verbose mode information When the feature is enabled at build-time, you can use the DNNL_VERBOSE environment variable to turn verbose mode on and control the level of verbosity.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_VERBOSE| 0 |no verbose output (default)|||1|primitive information at execution|||2|primitive information at creation and execution| prepare run.sh and enable DNNL_VERBOSE as 2
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=2
./dpcpp/out/cnn-inference-f32-cpp cpu >>log_cpu_f32_vb2.csv 2>&1
./dpcpp/out/cnn-inference-f32-cpp gpu >>log_gpu_f32_vb2.csv 2>&1
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Analyze Verbose Logs*** Step 1: List out all oneDNN verbose logsusers should see two verbose logs listed in the table below.|Log File Name | Description ||:-----|:----||log_cpu_f32_vb2.csv| log for cpu run ||log_gpu_f32_vb2.csv| log for gpu run|
###Code
import os
filenames= os.listdir (".")
result = []
keyword = ".csv"
for filename in filenames:
#if os.path.isdir(os.path.join(os.path.abspath("."), filename)):
if filename.find(keyword) != -1:
result.append(filename)
result.sort()
index =0
for folder in result:
print(" %d : %s " %(index, folder))
index+=1
###Output
_____no_output_____
###Markdown
Step 2: Pick a verbose log by putting its index value belowUsers can pick either cpu or gpu log for analysis. Once users finish Step 2 to Step 8 for one log file, they can go back to step 2 and select another log file for analysis.
###Code
FdIndex=0
###Output
_____no_output_____
###Markdown
OPTIONAL: browse the content of selected verbose log.
###Code
logfile = result[FdIndex]
with open(logfile) as f:
log = f.read()
print(log)
###Output
_____no_output_____
###Markdown
Step 3: Parse verbose log and get the data back
###Code
logfile = result[FdIndex]
print(logfile)
from profiling.profile_utils import oneDNNUtils, oneDNNLog
onednn = oneDNNUtils()
log1 = oneDNNLog()
log1.load_log(logfile)
data = log1.data
exec_data = log1.exec_data
###Output
_____no_output_____
###Markdown
Step 4: Time breakdown for exec typeThe exec type includes exec and create. |exec type | Description | |:-----|:----| |exec | Time for primitive execution. Ideally, most of the time is spent on primitive execution. | |create| Time for primitive creation. Primitive creation happens only once, so it should take comparatively little time. |
###Code
onednn.breakdown(data,"exec","time")
###Output
_____no_output_____
###Markdown
Step 5: Time breakdown for primitives typeThe primitives type includes convolution, reorder, sum, etc. For this simple convolution net example, convolution and inner product primitives are expected to take most of the time. However, the exact time percentage of different primitives may vary among different architectures. Users can easily identify the top hotspots of primitive executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"type","time")
###Output
_____no_output_____
###Markdown
Step 6: Time breakdown for JIT kernel type oneDNN uses just-in-time compilation (JIT) to generate optimal code for some functions based on input parameters and the instruction set supported by the system. Therefore, users can see different JIT kernel types among different CPU and GPU architectures. For example, users can see the avx_core_vnni JIT kernel if the workload uses VNNI instructions on a Cascade Lake platform. Users can also see different OCL kernels among different Intel GPU generations. Moreover, users can identify the top hotspots of JIT kernel executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"jit","time")
###Output
_____no_output_____
###Markdown
Step 7: Time breakdown for algorithm typeoneDNN also supports different algorithms. Users can identify the top hotspots of algorithm executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"alg","time")
###Output
_____no_output_____
###Markdown
Step 8: Time breakdown for architecture typeThe supported architectures include CPU and GPU. For this simple net sample, we don't split computation among CPU and GPU, so users should see either 100% CPU time or 100% GPU time.
###Code
onednn.breakdown(data,"arch","time")
###Output
_____no_output_____
###Markdown
*** Inspecting JIT CodeIn this section, we dump JIT code on the built sample from the previous section, and users can see different results from CPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_inspecting_jit.html) for detailed JIT Dump information When the feature is enabled at build-time, you can use the DNNL_JIT_DUMP environment variable to inspect JIT code.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_JIT_DUMP | 0 |JIT dump is disabled (default)|||any other value|JIT dump is enabled| Step 1: Prepare run.sh and enable DNNL_JIT_DUMP as 1
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# disable verbose log
export DNNL_VERBOSE=0
# enable JIT Dump
export DNNL_JIT_DUMP=1
./dpcpp/out/cnn-inference-f32-cpp cpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
Step 2: Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Step 3: Move all JIT Dump files into the jitdump folder
###Code
!mkdir jitdump;mv *.bin jitdump
###Output
_____no_output_____
###Markdown
Step 4: List out all oneDNN JIT Dump files
###Code
import os

# Collect all oneDNN JIT dump files (*.bin) inside the jitdump folder.
filenames = os.listdir("jitdump")
result = []
keyword = ".bin"
for filename in filenames:
    if filename.find(keyword) != -1:
        result.append(filename)
result.sort()
# Print an index next to each dump file so it can be selected in the next step.
for index, filename in enumerate(result):
    print(" %d : %s " % (index, filename))
###Output
_____no_output_____
###Markdown
Step 5: Pick a JIT Dump file by putting its index value below
###Code
FdIndex=2
###Output
_____no_output_____
###Markdown
Step 6: export JIT Dump file to environment variable JITFILE
###Code
logfile = result[FdIndex]
os.environ["JITFILE"] = logfile
###Output
_____no_output_____
###Markdown
Step 7: Disassemble the JIT Dump file to view the code> NOTE: If the oneDNN sample uses VNNI instructions, users should be able to see the "vpdpbusd" instruction in the JIT Dump file > NOTE: If the oneDNN sample uses BF16 instructions, users should see usage of vdpbf16ps or vcvtne2ps2bf16 in the JIT Dump file. > NOTE: To disassemble the vdpbf16ps and vcvtne2ps2bf16 instructions, users must use objdump v2.34 or above.
###Code
!objdump -D -b binary -mi386:x86-64 jitdump/$JITFILE
###Output
_____no_output_____
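###Markdown
The disassembly step above can also be scripted when there are many dump files to inspect, for example to find out which dumps contain the VNNI or BF16 instructions mentioned in the notes. A minimal sketch, assuming objdump is on the PATH and the dumps were moved into the jitdump folder as in the earlier step:
```python
import glob
import subprocess

def scan_jit_dumps(folder="jitdump", needles=("vpdpbusd", "vdpbf16ps", "vcvtne2ps2bf16")):
    """Disassemble every JIT dump and report which instructions of interest appear in it."""
    for path in sorted(glob.glob(f"{folder}/*.bin")):
        proc = subprocess.run(
            ["objdump", "-D", "-b", "binary", "-mi386:x86-64", path],
            capture_output=True, text=True, check=False)
        found = [n for n in needles if n in proc.stdout]
        print(f"{path}: {', '.join(found) if found else 'no VNNI/BF16 instructions found'}")

scan_jit_dumps()
```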
###Markdown
Profile Intel® oneAPI Deep Neural Network Library (oneDNN) Samples by using Verbose Mode and JIT DUMP inspection Learning ObjectivesIn this module the developer will:* Learn how to use Verbose Mode to profile oneDNN samples on CPU & GPU* Learn how to inspect JIT Dump to profile oneDNN samples on CPU This module shows the elapsed time percentage over different oneDNN primitives This module also shows the elapsed time percentage over different oneDNN JIT or GPU kernels *** Verbose Mode Exercise prerequisites*** Step 1: Prepare the build/run environmentoneDNN has four different configurations inside the Intel oneAPI toolkits. Each configuration is in a different folder under the oneDNN installation path, and each configuration supports a different compiler or threading library. Set the installation path of your oneAPI toolkit
###Code
# ignore all warning messages
import warnings
warnings.filterwarnings('ignore')
%autosave 0
# default path: /opt/intel/oneapi
%env ONEAPI_INSTALL=/opt/intel/oneapi
import os
if os.path.isdir(os.environ['ONEAPI_INSTALL']) == False:
print("ERROR! wrong oneAPI installation path")
!printf '%s\n' $ONEAPI_INSTALL/dnnl/latest/cpu_*
###Output
_____no_output_____
###Markdown
As you can see, there are four different folders under the oneDNN installation path, and each of those configurations supports different features. This tutorial will use the dpcpp configuration to showcase the verbose log for both CPU and GPU. Create a lab folder for this exercise.
###Code
!rm -rf lab;mkdir -p lab
###Output
_____no_output_____
###Markdown
Install required python packages.
###Code
!pip3 install -r requirements.txt
###Output
_____no_output_____
###Markdown
Get current platform information for this exercise.
###Code
from profiling.profile_utils import PlatformUtils
plat_utils = PlatformUtils()
plat_utils.dump_platform_info()
###Output
_____no_output_____
###Markdown
Step 2: Preparing the sample codeThis exercise uses the cnn_inference_f32.cpp example from the oneDNN installation path.The section below will copy the cnn_inference_f32.cpp file into the lab folder. This section also copies the required header files and CMake file into the lab folder.
###Code
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/cnn_inference_f32.cpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.hpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.h lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/CMakeLists.txt lab/
###Output
_____no_output_____
###Markdown
Step 3: Build and Run with the oneAPI DPC++ Compiler One of the oneDNN configurations supports the oneAPI DPC++ compiler, and it can run on different architectures by using DPC++.The following section shows you how to build with DPC++ and run on different architectures. Script - build.shThe script **build.sh** encapsulates the compiler **dpcpp** command and flags that will generate the executable.To enable use of the DPC++ compiler and the related SYCL runtime, some definitions must be passed as cmake arguments.Here are the related cmake arguments for the DPC++ configuration: -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
###Code
%%writefile build.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force> /dev/null 2>&1
export EXAMPLE_ROOT=./lab/
mkdir dpcpp
cd dpcpp
cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
make cnn-inference-f32-cpp
###Output
_____no_output_____
###Markdown
Once you achieve an all-clear from your compilation, you execute your program on the DevCloud or a local machine. Script - run.shThe script **run.sh** encapsulates the program for submission to the job queue for execution.By default, the built program uses CPU as the execution engine, but the user can switch to GPU by specifying the input argument "gpu".The user can refer to run.sh below to run cnn-inference-f32-cpp on both CPU and GPU.
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# disable verbose log for this baseline run
export DNNL_VERBOSE=0
./dpcpp/out/cnn-inference-f32-cpp cpu
./dpcpp/out/cnn-inference-f32-cpp gpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL: replace $ONEAPI_INSTALL with the value set above in both build.sh and run.sh> NOTE: this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('build.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Submitting **build.sh** and **run.sh** to the job queueNow we can submit **build.sh** and **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! rm -rf dpcpp;chmod 755 q; chmod 755 build.sh; chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q build.sh; ./q run.sh; else ./build.sh; ./run.sh; fi
###Output
_____no_output_____
###Markdown
Enable Verbose Mode***In this section, we enable verbose mode on the built sample from the previous section, and users can see different results from CPU and GPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html) for detailed verbose mode information When the feature is enabled at build-time, you can use the DNNL_VERBOSE environment variable to turn verbose mode on and control the level of verbosity.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_VERBOSE| 0 |no verbose output (default)|||1|primitive information at execution|||2|primitive information at creation and execution| Users could also enable DNNL_VERBOSE with timestamps by DNNL_VERBOSE_TIMESTAMP environment variable. |Environment variable|Value|Description||:-----|:----|:-----||DNNL_VERBOSE_TIMESTAMP| 0 |display timestamps disabled (default)|||1| display timestamps enabled| prepare run.sh and enable DNNL_VERBOSE as 2 and DNNL_VERBOSE_TIMESTAMP as 1
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=2
export DNNL_VERBOSE_TIMESTAMP=1
./dpcpp/out/cnn-inference-f32-cpp cpu >>log_cpu_f32_vb2.csv 2>&1
./dpcpp/out/cnn-inference-f32-cpp gpu >>log_gpu_f32_vb2.csv 2>&1
echo "########## Done with the run"
###Output
_____no_output_____
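###Markdown
As an alternative to editing run.sh, the same verbose settings can be applied directly from Python for a quick experiment. This is only a sketch: it assumes the sample has already been built at ./dpcpp/out/cnn-inference-f32-cpp and that the oneAPI environment variables are already set in the current session.
```python
import os
import subprocess

def run_with_verbose(engine="cpu"):
    """Run the built sample with DNNL_VERBOSE=2 and timestamps enabled, capturing the log."""
    env = dict(os.environ, DNNL_VERBOSE="2", DNNL_VERBOSE_TIMESTAMP="1")
    logname = f"log_{engine}_f32_vb2.csv"
    with open(logname, "a") as log:
        subprocess.run(["./dpcpp/out/cnn-inference-f32-cpp", engine],
                       env=env, stdout=log, stderr=subprocess.STDOUT, check=False)
    return logname

# Example: run_with_verbose("cpu"); run_with_verbose("gpu")
```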
###Markdown
OPTIONAL: replace $ONEAPI_INSTALL with the value set above in run.sh> NOTE: this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Analyze Verbose Logs*** Step 1: List out all oneDNN verbose logsUsers should see two verbose logs listed in the table below.|Log File Name | Description ||:-----|:----||log_cpu_f32_vb2.csv| log for cpu run ||log_gpu_f32_vb2.csv| log for gpu run|
###Code
import os

# Collect all oneDNN verbose log files (*.csv) in the working directory.
filenames = os.listdir(".")
result = []
keyword = ".csv"
for filename in filenames:
    if filename.find(keyword) != -1:
        result.append(filename)
result.sort()
# Print an index next to each log so it can be selected in the next step.
for index, filename in enumerate(result):
    print(" %d : %s " % (index, filename))
###Output
_____no_output_____
###Markdown
Step 2: Pick a verbose log by putting its index value belowUsers can pick either cpu or gpu log for analysis. Once users finish Step 2 to Step 8 for one log file, they can go back to step 2 and select another log file for analysis.
###Code
FdIndex=0
###Output
_____no_output_____
###Markdown
OPTIONAL: browse the content of selected verbose log.
###Code
logfile = result[FdIndex]
with open(logfile) as f:
log = f.read()
print(log)
###Output
_____no_output_____
###Markdown
Step 3: Parse verbose log and get the data back> Users will also get a oneDNN.json file with timeline information for oneDNN primitives.
###Code
logfile = result[FdIndex]
print(logfile)
from profiling.profile_utils import oneDNNUtils, oneDNNLog
onednn = oneDNNUtils()
log1 = oneDNNLog()
log1.load_log(logfile)
data = log1.data
exec_data = log1.exec_data
###Output
_____no_output_____
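###Markdown
The oneDNN.json file mentioned in Step 3 follows the Chrome trace-event format, which is what chrome://tracing renders. If you ever need to produce such a timeline yourself, the sketch below writes a minimal file in that format; the event keys (name, ph, ts, dur, pid, tid) are the standard trace-event fields, while the primitive names and durations are made-up placeholders.
```python
import json

# "X" marks a complete event; ts and dur are expressed in microseconds.
trace = {"traceEvents": [
    {"name": "convolution", "ph": "X", "ts": 0, "dur": 1200, "pid": 0, "tid": 0,
     "args": {"impl": "jit:avx2"}},
    {"name": "reorder", "ph": "X", "ts": 1200, "dur": 150, "pid": 0, "tid": 0},
]}

with open("timeline_example.json", "w") as f:
    json.dump(trace, f, indent=2)
# Load timeline_example.json in chrome://tracing to view the two events on a timeline.
```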
###Markdown
Step 4: Time breakdown for exec typeThe exec type includes exec and create. |exec type | Description | |:-----|:----| |exec | Time spent executing primitives. Ideally, most of the time is spent on primitive execution. | |create| Time spent creating primitives. Primitive creation happens once, so it should account for only a small share of the total time. |
###Code
onednn.breakdown(data,"exec","time")
###Output
_____no_output_____
###Markdown
Step 5: Time breakdown for primitive typeThe primitive type includes convolution, reorder, sum, etc. For this simple convolution net example, the convolution and inner product primitives are expected to consume most of the time. However, the exact time percentage of the different primitives may vary among architectures. Users can easily identify the top hotspots of primitive executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"type","time")
###Output
_____no_output_____
###Markdown
Step 6: Time breakdown for JIT kernel type oneDNN uses just-in-time compilation (JIT) to generate optimal code for some functions based on the input parameters and the instruction set supported by the system. Therefore, users can see different JIT kernel types among different CPU and GPU architectures. For example, users can see an avx_core_vnni JIT kernel if the workload uses VNNI instructions on a Cascade Lake platform. Users can also see different OCL kernels among different Intel GPU generations. Moreover, users can identify the top hotspots of JIT kernel executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"jit","time")
###Output
_____no_output_____
###Markdown
Step 7: Time breakdown for algorithm typeoneDNN also supports different algorithms. Users can identify the top hotspots of algorithm executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"alg","time")
###Output
_____no_output_____
###Markdown
Step 8: Time breakdown for architecture typeThe supported architectures include CPU and GPU. For this simple net sample, we don't split computation among CPU and GPU, so users should see either 100% CPU time or 100% GPU time.
###Code
onednn.breakdown(data,"arch","time")
###Output
_____no_output_____
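###Markdown
If a visual summary is easier to compare across runs, the same aggregation can be plotted. A minimal sketch, assuming exec_data is the parsed DataFrame with "type" and "time" columns as used in the steps above (the column names are assumptions about profile_utils):
```python
import matplotlib.pyplot as plt

def plot_breakdown(df, group_col="type", time_col="time"):
    """Plot each category's share of the total elapsed time as a horizontal bar chart."""
    totals = df.groupby(group_col)[time_col].sum().sort_values()
    share = 100.0 * totals / totals.sum()
    share.plot.barh()
    plt.xlabel("share of total time (%)")
    plt.title(f"oneDNN time breakdown by {group_col}")
    plt.tight_layout()
    plt.show()

# Example: plot_breakdown(exec_data, "type")
```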
###Markdown
Step 9: Timeline tracingWith the generated oneDNN.json file, users can see the oneDNN primitives on a timeline via the chrome://tracing UI. Load the oneDNN.json file in the chrome://tracing UI to see the timeline information for each primitive.> Note : To use the chrome://tracing UI, users only need to type "chrome://tracing" in a browser with Chrome tracing support (Google Chrome and Microsoft Edge). *** Inspecting JIT CodeIn this section, we dump JIT code on the built sample from the previous section, and users can see different results from CPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_inspecting_jit.html) for detailed JIT Dump information When the feature is enabled at build-time, you can use the DNNL_JIT_DUMP environment variable to inspect JIT code.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_JIT_DUMP | 0 |JIT dump is disabled (default)|||any other value|JIT dump is enabled| Step 1: Prepare run.sh and enable DNNL_JIT_DUMP as 1
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# disable verbose log
export DNNL_VERBOSE=0
# enable JIT Dump
export DNNL_JIT_DUMP=1
./dpcpp/out/cnn-inference-f32-cpp cpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL: replace $ONEAPI_INSTALL with the value set above in run.sh> NOTE: this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Step 2: Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Step 3: Move all JIT Dump files into the jitdump folder
###Code
!mkdir jitdump;mv *.bin jitdump
###Output
_____no_output_____
###Markdown
Step 4: List out all oneDNN JIT Dump files
###Code
import os

# Collect all oneDNN JIT dump files (*.bin) inside the jitdump folder.
filenames = os.listdir("jitdump")
result = []
keyword = ".bin"
for filename in filenames:
    if filename.find(keyword) != -1:
        result.append(filename)
result.sort()
# Print an index next to each dump file so it can be selected in the next step.
for index, filename in enumerate(result):
    print(" %d : %s " % (index, filename))
###Output
_____no_output_____
###Markdown
Step 5: Pick a JIT Dump file by putting its index value below
###Code
FdIndex=0
###Output
_____no_output_____
###Markdown
Step 6: export JIT Dump file to environment variable JITFILE
###Code
logfile = result[FdIndex]
os.environ["JITFILE"] = logfile
###Output
_____no_output_____
###Markdown
Step 7: Disassemble the JIT Dump file to view the code> NOTE: If the oneDNN sample uses VNNI instructions, users should be able to see the "vpdpbusd" instruction in the JIT Dump file > NOTE: If the oneDNN sample uses BF16 instructions, users should see usage of vdpbf16ps or vcvtne2ps2bf16 in the JIT Dump file. > NOTE: To disassemble the vdpbf16ps and vcvtne2ps2bf16 instructions, users must use objdump v2.34 or above.
###Code
!objdump -D -b binary -mi386:x86-64 jitdump/$JITFILE
###Output
_____no_output_____
###Markdown
Profile Intel® oneAPI Deep Neural Network Library (oneDNN) Samples by using Verbose Mode and JIT DUMP inspection Learning ObjectivesIn this module the developer will:* Learn how to use Verbose Mode to profile oneDNN samples on CPU & GPU* Learn how to inspect JIT Dump to profile oneDNN samples on CPU This module shows the elapsed time percentage over different oneDNN primitives This module also shows the elapsed time percentage over different oneDNN JIT or GPU kernels *** Verbose Mode Exercise prerequisites*** Step 1: Prepare the build/run environmentoneDNN has four different configurations inside the Intel oneAPI toolkits. Each configuration is in a different folder under the oneDNN installation path, and each configuration supports a different compiler or threading library. Set the installation path of your oneAPI toolkit
###Code
# ignore all warning messages
import warnings
warnings.filterwarnings('ignore')
# default path: /opt/intel/oneapi
%env ONEAPI_INSTALL=/opt/intel/oneapi
import os
if os.path.isdir(os.environ['ONEAPI_INSTALL']) == False:
print("ERROR! wrong oneAPI installation path")
!printf '%s\n' $ONEAPI_INSTALL/dnnl/latest/cpu_*
###Output
_____no_output_____
###Markdown
As you can see, there are four different folders under the oneDNN installation path, and each of those configurations supports different features. This tutorial will use the dpcpp configuration to showcase the verbose log for both CPU and GPU. Create a lab folder for this exercise.
###Code
!rm -rf lab;mkdir -p lab
###Output
_____no_output_____
###Markdown
Install required python packages.
###Code
!pip3 install -r requirements.txt
###Output
_____no_output_____
###Markdown
Get current platform information for this exercise.
###Code
from profiling.profile_utils import PlatformUtils
plat_utils = PlatformUtils()
plat_utils.dump_platform_info()
###Output
_____no_output_____
###Markdown
Step 2: Preparing the sample codeThis exercise uses the cnn_inference_f32.cpp example from the oneDNN installation path.The section below will copy the cnn_inference_f32.cpp file into the lab folder. This section also copies the required header files and CMake file into the lab folder.
###Code
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/cnn_inference_f32.cpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.hpp lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/example_utils.h lab/
!cp $ONEAPI_INSTALL/dnnl/latest/cpu_dpcpp_gpu_dpcpp/examples/CMakeLists.txt lab/
###Output
_____no_output_____
###Markdown
Step 3: Build and Run with the oneAPI DPC++ Compiler One of the oneDNN configurations supports the oneAPI DPC++ compiler, and it can run on different architectures by using DPC++.The following section shows you how to build with DPC++ and run on different architectures. Script - build.shThe script **build.sh** encapsulates the compiler **dpcpp** command and flags that will generate the executable.To enable use of the DPC++ compiler and the related SYCL runtime, some definitions must be passed as cmake arguments.Here are the related cmake arguments for the DPC++ configuration: -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
###Code
%%writefile build.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force> /dev/null 2>&1
export EXAMPLE_ROOT=./lab/
mkdir dpcpp
cd dpcpp
cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp -DDNNL_CPU_RUNTIME=SYCL -DDNNL_GPU_RUNTIME=SYCL
make cnn-inference-f32-cpp
###Output
_____no_output_____
###Markdown
Once you achieve an all-clear from your compilation, you execute your program on the DevCloud or a local machine. Script - run.shThe script **run.sh** encapsulates the program for submission to the job queue for execution.By default, the built program uses CPU as the execution engine, but the user can switch to GPU by specifying the input argument "gpu".The user can refer to run.sh below to run cnn-inference-f32-cpp on both CPU and GPU.
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# disable verbose log for this baseline run
export DNNL_VERBOSE=0
./dpcpp/out/cnn-inference-f32-cpp cpu
./dpcpp/out/cnn-inference-f32-cpp gpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL: replace $ONEAPI_INSTALL with the value set above in both build.sh and run.sh> NOTE: this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('build.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Submitting **build.sh** and **run.sh** to the job queueNow we can submit **build.sh** and **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! rm -rf dpcpp;chmod 755 q; chmod 755 build.sh; chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q build.sh; ./q run.sh; else ./build.sh; ./run.sh; fi
###Output
_____no_output_____
###Markdown
Enable Verbose Mode***In this section, we enable verbose mode on the built sample from the previous section, and users can see different results from CPU and GPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html) for detailed verbose mode information When the feature is enabled at build-time, you can use the DNNL_VERBOSE environment variable to turn verbose mode on and control the level of verbosity.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_VERBOSE| 0 |no verbose output (default)|||1|primitive information at execution|||2|primitive information at creation and execution| prepare run.sh and enable DNNL_VERBOSE as 2
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# enable verbose log
export DNNL_VERBOSE=2
./dpcpp/out/cnn-inference-f32-cpp cpu >>log_cpu_f32_vb2.csv 2>&1
./dpcpp/out/cnn-inference-f32-cpp gpu >>log_gpu_f32_vb2.csv 2>&1
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL: replace $ONEAPI_INSTALL with the value set above in run.sh> NOTE: this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue. NOTE - it is possible to execute any of the build and run commands in local environments.To enable users to run their scripts either on the Intel DevCloud or in local environments, this and subsequent training checks for the existence of the job submission command **qsub**. If the check fails, it is assumed that build/run will be local.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Analyze Verbose Logs*** Step 1: List out all oneDNN verbose logsUsers should see two verbose logs listed in the table below.|Log File Name | Description ||:-----|:----||log_cpu_f32_vb2.csv| log for cpu run ||log_gpu_f32_vb2.csv| log for gpu run|
###Code
import os

# Collect all oneDNN verbose log files (*.csv) in the working directory.
filenames = os.listdir(".")
result = []
keyword = ".csv"
for filename in filenames:
    if filename.find(keyword) != -1:
        result.append(filename)
result.sort()
# Print an index next to each log so it can be selected in the next step.
for index, filename in enumerate(result):
    print(" %d : %s " % (index, filename))
###Output
_____no_output_____
###Markdown
Step 2: Pick a verbose log by putting its index value belowUsers can pick either cpu or gpu log for analysis. Once users finish Step 2 to Step 8 for one log file, they can go back to step 2 and select another log file for analysis.
###Code
FdIndex=0
###Output
_____no_output_____
###Markdown
OPTIONAL: browse the content of selected verbose log.
###Code
logfile = result[FdIndex]
with open(logfile) as f:
log = f.read()
print(log)
###Output
_____no_output_____
###Markdown
Step 3: Parse verbose log and get the data back
###Code
logfile = result[FdIndex]
print(logfile)
from profiling.profile_utils import oneDNNUtils, oneDNNLog
onednn = oneDNNUtils()
log1 = oneDNNLog()
log1.load_log(logfile)
data = log1.data
exec_data = log1.exec_data
###Output
_____no_output_____
###Markdown
Step 4: Time breakdown for exec typeThe exec type includes exec and create. |exec type | Description | |:-----|:----| |exec | Time spent executing primitives. Ideally, most of the time is spent on primitive execution. | |create| Time spent creating primitives. Primitive creation happens once, so it should account for only a small share of the total time. |
###Code
onednn.breakdown(data,"exec","time")
###Output
_____no_output_____
###Markdown
Step 5: Time breakdown for primitive typeThe primitive type includes convolution, reorder, sum, etc. For this simple convolution net example, the convolution and inner product primitives are expected to consume most of the time. However, the exact time percentage of the different primitives may vary among architectures. Users can easily identify the top hotspots of primitive executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"type","time")
###Output
_____no_output_____
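###Markdown
Besides the aggregated shares, it is sometimes useful to look at the slowest individual primitive calls, since a single large convolution can dominate the total. A small sketch, again assuming exec_data is a DataFrame with a numeric "time" column (an assumption about profile_utils, not a documented API):
```python
def top_hotspots(df, n=10, time_col="time"):
    """Return the n individual primitive executions with the largest elapsed time."""
    return df.sort_values(time_col, ascending=False).head(n)

# Example: show the five slowest calls recorded in the verbose log
# top_hotspots(exec_data, n=5)
```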
###Markdown
Step 6: Time breakdown for JIT kernel type oneDNN uses just-in-time compilation (JIT) to generate optimal code for some functions based on the input parameters and the instruction set supported by the system. Therefore, users can see different JIT kernel types among different CPU and GPU architectures. For example, users can see an avx_core_vnni JIT kernel if the workload uses VNNI instructions on a Cascade Lake platform. Users can also see different OCL kernels among different Intel GPU generations. Moreover, users can identify the top hotspots of JIT kernel executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"jit","time")
###Output
_____no_output_____
###Markdown
Step 7: Time breakdown for algorithm typeoneDNN also supports different algorithms. Users can identify the top hotspots of algorithm executions with this time breakdown.
###Code
onednn.breakdown(exec_data,"alg","time")
###Output
_____no_output_____
###Markdown
Step 8: Time breakdown for architecture typeThe supported architectures include CPU and GPU. For this simple net sample, we don't split computation among CPU and GPU, so users should see either 100% CPU time or 100% GPU time.
###Code
onednn.breakdown(data,"arch","time")
###Output
_____no_output_____
###Markdown
*** Inspecting JIT CodeIn this section, we dump JIT code on the built sample from the previous section, and users can see different results from CPU. Refer to the [link](https://oneapi-src.github.io/oneDNN/dev_guide_inspecting_jit.html) for detailed JIT Dump information When the feature is enabled at build-time, you can use the DNNL_JIT_DUMP environment variable to inspect JIT code.|Environment variable|Value|Description||:-----|:----|:-----||DNNL_JIT_DUMP | 0 |JIT dump is disabled (default)|||any other value|JIT dump is enabled| Step 1: Prepare run.sh and enable DNNL_JIT_DUMP as 1
###Code
%%writefile run.sh
#!/bin/bash
source $ONEAPI_INSTALL/setvars.sh --force > /dev/null 2>&1
echo "########## Executing the run"
# disable verbose log
export DNNL_VERBOSE=0
# enable JIT Dump
export DNNL_JIT_DUMP=1
./dpcpp/out/cnn-inference-f32-cpp cpu
echo "########## Done with the run"
###Output
_____no_output_____
###Markdown
OPTIONAL: replace $ONEAPI_INSTALL with the value set above in run.sh> NOTE: this step is mandatory if you run the notebook on DevCloud
###Code
from profiling.profile_utils import FileUtils
file_utils = FileUtils()
file_utils.replace_string_in_file('run.sh','$ONEAPI_INSTALL', os.environ['ONEAPI_INSTALL'] )
###Output
_____no_output_____
###Markdown
Step 2: Submitting **run.sh** to the job queueNow we can submit **run.sh** to the job queue.
###Code
! chmod 755 run.sh;if [ -x "$(command -v qsub)" ]; then ./q run.sh; else ./run.sh; fi
###Output
_____no_output_____
###Markdown
Step 3: Move all JIT Dump files into the jitdump folder
###Code
!mkdir jitdump;mv *.bin jitdump
###Output
_____no_output_____
###Markdown
Step 4: List out all oneDNN JIT Dump files
###Code
import os

# Collect all oneDNN JIT dump files (*.bin) inside the jitdump folder.
filenames = os.listdir("jitdump")
result = []
keyword = ".bin"
for filename in filenames:
    if filename.find(keyword) != -1:
        result.append(filename)
result.sort()
# Print an index next to each dump file so it can be selected in the next step.
for index, filename in enumerate(result):
    print(" %d : %s " % (index, filename))
###Output
_____no_output_____
###Markdown
Step 5: Pick a JIT Dump file by putting its index value below
###Code
FdIndex=0
###Output
_____no_output_____
###Markdown
Step 6: export JIT Dump file to environment variable JITFILE
###Code
logfile = result[FdIndex]
os.environ["JITFILE"] = logfile
###Output
_____no_output_____
###Markdown
Step 7: Disassemble the JIT Dump file to view the code> NOTE: If the oneDNN sample uses VNNI instructions, users should be able to see the "vpdpbusd" instruction in the JIT Dump file > NOTE: If the oneDNN sample uses BF16 instructions, users should see usage of vdpbf16ps or vcvtne2ps2bf16 in the JIT Dump file. > NOTE: To disassemble the vdpbf16ps and vcvtne2ps2bf16 instructions, users must use objdump v2.34 or above.
###Code
!objdump -D -b binary -mi386:x86-64 jitdump/$JITFILE
###Output
_____no_output_____ |
lessons/Test_and_function_programming_in_Python/py_unit_test_intro.ipynb | ###Markdown
Agenda* Testing Your Code* Writing Your First Test* Executing Your First Test* More Advanced Testing Scenarios* Testing in Multiple Environments* Automating the Execution of Your Tests* What’s Next ([article source](https://realpython.com/python-testing/)) **There are many ways to test your code. In this tutorial, you’ll learn the techniques from the most basic steps and work towards advanced methods.** Testing Your Code* Automated vs. Manual Testing* Unit Tests vs. Integration Tests* Choosing a Test Runner * unittest * nose2 * pytestThis tutorial is for anyone who has written a fantastic application in Python but hasn’t yet written any tests.Testing in Python is a huge topic and can come with a lot of complexity, but it doesn’t need to be hard. You can get started creating simple tests for your application in a few easy steps and then build on it from there.In this tutorial, you’ll learn how to create a basic test, execute it, and find the bugs before your users do! You’ll learn about the tools available to write and execute tests, check your application’s performance, and even look for security issues. Automated vs. Manual TestingThe good news is, you’ve probably already created a test without realizing it. Remember when you ran your application and used it for the first time? Did you check the features and experiment using them? That’s known as exploratory testing and is a form of manual testing.Exploratory testing is a form of testing that is done without a plan. In an exploratory test, you’re just exploring the application.To have a complete set of manual tests, all you need to do is make a list of all the features your application has, the different types of input it can accept, and the expected results. Now, every time you make a change to your code, you need to go through every single item on that list and check it.That doesn’t sound like much fun, does it?This is where automated testing comes in. Automated testing is the execution of your test plan (the parts of your application you want to test, the order in which you want to test them, and the expected responses) by a script instead of a human. Python already comes with a set of tools and libraries to help you create automated tests for your application. We’ll explore those tools and libraries in this tutorial. Unit Tests vs. Integration TestsThe world of testing has no shortage of terminology, and now that you know the difference between automated and manual testing, it’s time to go a level deeper.Think of how you might test the lights on a car. You would turn on the lights (known as the test step) and go outside the car or ask a friend to check that the lights are on (known as the test assertion). Testing multiple components is known as integration testing.Think of all the things that need to work correctly in order for a simple task to give the right result. These components are like the parts to your application, all of those classes, functions, and modules you’ve written.A major challenge with integration testing is when an integration test doesn’t give the right result. It’s very hard to diagnose the issue without being able to isolate which part of the system is failing. If the lights didn’t turn on, then maybe the bulbs are broken. Is the battery dead? What about the alternator? Is the car’s computer failing?If you have a fancy modern car, it will tell you when your light bulbs have gone. 
It does this using a form of unit test.A unit test is a smaller test, one that checks that a single component operates in the right way. A unit test helps you to isolate what is broken in your application and fix it faster. You have just seen two types of tests:1. An integration test checks that components in your application operate with each other.2. A unit test checks a small component in your application.You can write both integration tests and unit tests in Python. To write a unit test for the built-in function sum(), you would check the output of sum() against a known output.For example, here’s how you check that the sum() of the numbers (1, 2, 3) equals 6:
###Code
assert sum([1, 2, 3]) == 6, "Should be 6"
###Output
_____no_output_____
###Markdown
This will not output anything on the REPL because the values are correct.If the result from sum() is incorrect, this will fail with an AssertionError and the message "Should be 6". Try an assertion statement again with the wrong values to see an AssertionError:```python>>> assert sum([1, 1, 1]) == 6, "Should be 6"Traceback (most recent call last): File "", line 1, in AssertionError: Should be 6```Instead of testing on the REPL, you’ll want to put this into a new Python file called test_sum.py and execute it again:```pythondef test_sum(): assert sum([1, 2, 3]) == 6, "Should be 6"if __name__ == "__main__": test_sum() print("Everything passed")```Now you have written a test case, an assertion, and an entry point (the command line). You can now execute this at the command line:
###Code
%run -i 'test_sum.py'
###Output
Everything passed
###Markdown
In Python, sum() accepts any iterable as its first argument. You tested with a list. Now test with a tuple as well. Create a new file called test_sum_2.py with the following code:```pythondef test_sum(): assert sum([1, 2, 3]) == 6, "Should be 6"def test_sum_tuple(): assert sum((1, 2, 2)) == 6, "Should be 6"if __name__ == "__main__": test_sum() test_sum_tuple() print("Everything passed")```When you execute test_sum_2.py, the script will give an error because the sum() of (1, 2, 2) is 5, not 6. The result of the script gives you the error message, the line of code, and the traceback:```python$ python test_sum_2.pyTraceback (most recent call last): File "test_sum_2.py", line 9, in test_sum_tuple() File "test_sum_2.py", line 5, in test_sum_tuple assert sum((1, 2, 2)) == 6, "Should be 6"AssertionError: Should be 6```Here you can see how a mistake in your code gives an error on the console with some information on where the error was and what the expected result was.Writing tests in this way is okay for a simple check, but what if more than one fails? This is where test runners come in. The test runner is a special application designed for running tests, checking the output, and giving you tools for debugging and diagnosing tests and applications. Choosing a Test RunnerThere are many test runners available for Python. The one built into the Python standard library is called unittest. In this tutorial, you will be using unittest test cases and the unittest test runner. The principles of unittest are easily portable to other frameworks. The three most popular test runners are:* unittest* nose or nose2* pytestChoosing the best test runner for your requirements and level of experience is important. unittestunittest has been built into the Python standard library since version 2.1. You’ll probably see it in commercial Python applications and open-source projects.unittest contains both a testing framework and a test runner. unittest has some important requirements for writing and executing tests. unittest requires that:* You put your tests into classes as methods* You use a series of special assertion methods in the unittest.TestCase class instead of the built-in assert statementTo convert the earlier example to a unittest test case, you would have to:1. Import unittest from the standard library2. Create a class called TestSum that inherits from the TestCase class3. Convert the test functions into methods by adding self as the first argument4. Change the assertions to use the self.assertEqual() method on the TestCase class5. Change the command-line entry point to call unittest.main()Follow those steps by creating a new file test_sum_unittest.py with the following code:```pythonimport unittestclass TestSum(unittest.TestCase): def test_sum(self): self.assertEqual(sum([1, 2, 3]), 6, "Should be 6") def test_sum_tuple(self): self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")if __name__ == '__main__': unittest.main()```If you execute this at the command line, you’ll see one success (indicated with .) 
and one failure (indicated with F):```console python test_sum_unittest.py.F======================================================================FAIL: test_sum_tuple (__main__.TestSum)----------------------------------------------------------------------Traceback (most recent call last): File "test_sum_unittest.py", line 10, in test_sum_tuple self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")AssertionError: 5 != 6 : Should be 6----------------------------------------------------------------------Ran 2 tests in 0.000sFAILED (failures=1)```You have just executed two tests using the unittest test runner. For more information on unittest, you can explore the unittest Documentation. noseYou may find that over time, as you write hundreds or even thousands of tests for your application, it becomes increasingly hard to understand and use the output from unittest.nose is compatible with any tests written using the unittest framework and can be used as a drop-in replacement for the unittest test runner. The development of nose as an open-source application fell behind, and a fork called nose2 was created. If you’re starting from scratch, it is recommended that you use nose2 instead of nose.To get started with nose2, install nose2 from PyPI and execute it on the command line. nose2 will try to discover all test scripts named test*.py and test cases inheriting from unittest.TestCase in your current directory:```python$ pip install nose2$ python -m nose2.F======================================================================FAIL: test_sum_tuple (__main__.TestSum)----------------------------------------------------------------------Traceback (most recent call last): File "test_sum_unittest.py", line 9, in test_sum_tuple self.assertEqual(sum((1, 2, 2)), 6, "Should be 6")AssertionError: Should be 6----------------------------------------------------------------------Ran 2 tests in 0.001sFAILED (failures=1)```You have just executed the test you created in test_sum_unittest.py from the nose2 test runner. nose2 offers many command-line flags for filtering the tests that you execute. For more information, you can explore the Nose 2 documentation. pytestpytest supports execution of unittest test cases. The real advantage of pytest comes by writing pytest test cases. pytest test cases are a series of functions in a Python file starting with the name test_.pytest has some other great features:* Support for the built-in assert statement instead of using special self.assert*() methods* Support for filtering for test cases* Ability to rerun from the last failing test* An ecosystem of hundreds of plugins to extend the functionalityWriting the TestSum test case example for pytest would look like this:```pythondef test_sum(): assert sum([1, 2, 3]) == 6, "Should be 6"def test_sum_tuple(): assert sum((1, 2, 2)) == 6, "Should be 6"```You have dropped the TestCase, any use of classes, and the command-line entry point. More information can be found at the Pytest Documentation Website. Writing Your First Test* Where to Write the Test* How to Structure a Simple Test* How to Write Assertions* Side EffectsLet’s bring together what you’ve learned so far and, instead of testing the built-in sum() function, test a simple implementation of the same requirement.Create a new project folder and, inside that, create a new folder called my_sum. Inside my_sum, create an empty file called \_\_init__.py. 
Creating the \_\_init__.py file means that themy_sum folder can be imported as a module from the parent directory.Your project folder should look like this:```console tree project/project/`-- my_sum `-- __init__.py1 directory, 1 file```Open up my_sum/__init__.py and create a new function called sum(), which takes an iterable (a list, tuple, or set) and adds the values together:```pythondef sum(arg): total = 0 for val in arg: total += val return total```This code example creates a variable called total, iterates over all the values in arg, and adds them to total. It then returns the result once the iterable has been exhausted. Where to Write the TestTo get started writing tests, you can simply create a file called test.py, which will contain your first test case. Because the file will need to be able to import your application to be able to test it, you want to place test.py above the package folder, so your directory tree will look something like this:```console tree project/project/|-- my_sum| `-- __init__.py`-- test.py1 directory, 2 files```You’ll find that, as you add more and more tests, your single file will become cluttered and hard to maintain, so you can create a folder called tests/ and split the tests into multiple files. It is convention to ensure each file starts with test_ so all test runners will assume that Python file contains tests to be executed. Some very large projects split tests into more subdirectories based on their purpose or usage.Note: What if your application is a single script?> You can import any attributes of the script, such as classes, functions, and variables by using the built-in __import__() function. Instead of from my_sum import sum, you can write the following:> ```python> target = __import__("my_sum.py")> sum = target.sum> ```> > The benefit of using __import__() is that you don’t have to turn your project folder into a package, and you can specify the file name. This is also useful if your filename collides with any standard library packages. For example, math.py would collide with the math module. How to Structure a Simple TestBefore you dive into writing tests, you’ll want to first make a couple of decisions:1. What do you want to test?2. Are you writing a unit test or an integration test?Then the structure of a test should loosely follow this workflow:1. Create your inputs2. Execute the code being tested, capturing the output3. Compare the output with an expected resultFor this application, you’re testing sum(). There are many behaviors in sum() you could check, such as:* Can it sum a list of whole numbers (integers)?* Can it sum a tuple or set?* Can it sum a list of floats?* What happens when you provide it with a bad value, such as a single integer or a string?* What happens when one of the values is negative?The most simple test would be a list of integers. Create a file, test_sum.py with the following Python code:```pythonimport unittestfrom my_sum import sumclass TestSum(unittest.TestCase): def test_list_int(self): """ Test that it can sum a list of integers """ data = [1, 2, 3] result = sum(data) self.assertEqual(result, 6)if __name__ == '__main__': unittest.main()```The file structure:```console tree ././|-- conftest.py|-- my_sum| `-- __init__.py|-- test.py`-- tests `-- test_sum.py2 directories, 4 files```The execution result of pytest:```console pytest tests/collected 1 itemtests/test_sum.py .======= 1 passed in 0.02s =========``` How to Write AssertionsThe last step of writing a test is to validate the output against a known response. 
This is known as an assertion. There are some general best practices around how to write assertions:* Make sure tests are repeatable and run your test multiple times to make sure it gives the same result every time* Try and assert results that relate to your input data, such as checking that the result is the actual sum of values in the sum() exampleunittest comes with lots of methods to assert on the values, types, and existence of variables. Here are some of the most commonly used methods (more): Side EffectsWhen you’re writing tests, it’s often not as simple as looking at the return value of a function. Often, executing a piece of code will alter other things in the environment, such as the attribute of a class, a file on the filesystem, or a value in a database. These are known as side effects and are an important part of testing. Decide if the side effect is being tested before including it in your list of assertions.If you find that the unit of code you want to test has lots of side effects, you might be breaking the Single Responsibility Principle. Breaking the Single Responsibility Principle means the piece of code is doing too many things and would be better off being refactored. Following the Single Responsibility Principle is a great way to design code that it is easy to write repeatable and simple unit tests for, and ultimately, reliable applications. Executing Your First Test* Executing Test Runners* Understanding Test OutputNow that you’ve created the first test, you want to execute it. Sure, you know it’s going to pass, but before you create more complex tests, you should check that you can execute the tests successfully. Executing Test RunnersThe Python application that executes your test code, checks the assertions, and gives you test results in your console is called the test runner.At the bottom of test.py, you added this small snippet of code:```pythonif __name__ == '__main__': unittest.main()```This is a command line entry point. It means that if you execute the script alone by running python test.py at the command line, it will call unittest.main(). This executes the test runner by discovering all classes in this file that inherit from unittest.TestCase.This is one of many ways to execute the unittest test runner. When you have a single test file named test.py, calling python test.py is a great way to get started. Another way is using the unittest command line. Try this:```sh$ python -m unittest test```This will execute the same test module (called test) via the command line.You can provide additional options to change the output. One of those is -v for verbose. Try that next:```sh$ python -m unittest -v testtest_list_int (test.TestSum) ... ok----------------------------------------------------------------------Ran 1 tests in 0.000s```This executed the one test inside test.py and printed the results to the console. 
Verbose mode listed the names of the tests it executed first, along with the result of each test.Instead of providing the name of a module containing tests, you can request an auto-discovery using the following:```sh$ python -m unittest discover```This will search the current directory for any files named test*.py and attempt to test them.Once you have multiple test files, as long as you follow the test*.py naming pattern, you can provide the name of the directory instead by using the -s flag and the name of the directory:```sh$ python -m unittest discover -s tests```unittest will run all tests in a single test plan and give you the results.Lastly, if your source code is not in the directory root and contained in a subdirectory, for example in a folder called src/, you can tell unittest where to execute the tests so that it can import the modules correctly with the -t flag:```sh$ python -m unittest discover -s tests -t src```unittest will change to the src/ directory, scan for all test*.py files inside the the tests directory, and execute them. Understanding Test OutputThat was a very simple example where everything passes, so now you’re going to try a failing test and interpret the output.sum() should be able to accept other lists of numeric types, like fractions.At the top of the test.py file, add an import statement to import the Fraction type from the fractions module in the standard library:```pythonfrom fractions import Fraction```Now add a test with an assertion expecting the incorrect value, in this case expecting the sum of 1/4, 1/4, and 2/5 to be 1:```python def test_list_fraction(self): """ Test that it can sum a list of fractions """ data = [Fraction(1, 4), Fraction(1, 4), Fraction(2, 5)] result = sum(data) self.assertEqual(result, 1)```If you execute the tests again with python -m unittest test, you should see the following output:```sh$ python -m unittest testF.======================================================================FAIL: test_list_fraction (test.TestSum)----------------------------------------------------------------------Traceback (most recent call last): File "test.py", line 21, in test_list_fraction self.assertEqual(result, 1)AssertionError: Fraction(9, 10) != 1----------------------------------------------------------------------Ran 2 tests in 0.001sFAILED (failures=1)```In the output, you’ll see the following information:1. The first line shows the execution results of all the tests, one failed (F) and one passed (.).2. The FAIL entry shows some details about the failed test: * The test method name (test_list_fraction) * The test module (test) and the test case (TestSum) * A traceback to the failing line * The details of the assertion with the expected result (1) and the actual result (Fraction(9, 10)) Remember, you can add extra information to the test output by adding the -v flag to the python -m unittest command. More Advanced Testing Scenarios ([back](sect0))* Handling Expected Failures* Isolating Behaviors in Your Application* A simple test case example in using MockBefore you step into creating tests for your application, remember the three basic steps of every test:1. Create your inputs (Arrange)2. Execute the code, capturing the output (Act)3. Compare the output with an expected result (Assertion)It’s not always as easy as creating a static value for the input like a string or a number. Sometimes, your application will require an instance of a class or a context. What do you do then?The data that you create as an input is known as a fixture. 
It’s common practice to create fixtures and reuse them.If you’re running the same test and passing different values each time and expecting the same result, this is known as parameterization. Handling Expected FailuresEarlier, when you made a list of scenarios to test sum(), a question came up: What happens when you provide it with a bad value, such as a single integer or a string?In this case, you would expect sum() to throw an error. When it does throw an error, that would cause the test to fail.There’s a special way to handle expected errors. You can use .assertRaises() as a context-manager, then inside the with block execute the test steps:```python def test_bad_type(self): data = "banana" with self.assertRaises(TypeError): result = sum(data)```This test case will now only pass if sum(data) raises a TypeError. You can replace TypeError with any exception type you choose. Isolating Behaviors in Your ApplicationEarlier in the tutorial, you learned what a side effect is. Side effects make unit testing harder since, each time a test is run, it might give a different result, or even worse, one test could impact the state of the application and cause another test to fail!There are some simple techniques you can use to test parts of your application that have many side effects:* Refactoring code to follow the Single Responsibility Principle* Mocking out any method or function calls to remove side effects* Using integration testing instead of unit testing for this piece of the applicationIf you’re not familiar with mocking, see Python CLI Testing or Understanding the Python Mock Object Library for some great examples. A simple test case example in using Mockunittest.mock offers a base class for mocking objects called Mock. The use cases for Mock are practically limitless because Mock is so flexible.Consider we have a module to delete file:* mymodule/\_\_init__.py```pythonimport osdef rm(filename): os.remove(filename)```Let's check the test case of it:* tests/test_rm.py```python!/usr/bin/env python -*- coding: utf-8 -*-from mymodule import rmimport os.pathimport tempfileimport unittestclass RmTestCase(unittest.TestCase): tmpfilepath = os.path.join(tempfile.gettempdir(), "tmp-testfile") def setUp(self): with open(self.tmpfilepath, "wb") as f: f.write("Delete me!") def test_rm(self): remove the file rm(self.tmpfilepath) test that it was actually removed self.assertFalse(os.path.isfile(self.tmpfilepath), "Failed to remove the file.")```Our test case is pretty simple, but every time it is run, a temporary file is created and then deleted. Additionally, we have no way of testing whether our rm method properly passes the argument down to the os.remove call. We can assume that it does based on the test above, but much is left to be desired.With help of unittest.mock, we can rewrite our test case in a graceful way:* tests/test_rm_v2.py```pythonfrom mymodule import rmimport mockimport unittestclass RmTestCase(unittest.TestCase): @mock.patch('mymodule.os') def test_rm(self, mock_os): rm("any path") test that rm called os.remove with the right parameters mock_os.remove.assert_called_with("any path")```Here we "patched" the os module imported in mymodule by @mock.patch('mymodule.os') and obtained a Mock object as mock_os. So latter we can use it to do some assertation to confirm the behavior of our testing module.
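###Markdown
A Mock can also be configured to return a value or raise an exception through return_value and side_effect, which is useful for testing error handling around calls like os.remove. The sketch below is illustrative only: remove_if_exists is a hypothetical helper (not part of mymodule), and the Mock objects stand in for the injected remover function.
```python
import unittest
from unittest import mock

def remove_if_exists(path, remover):
    """Hypothetical helper: tolerate a missing file, propagate any other error."""
    try:
        remover(path)
        return True
    except FileNotFoundError:
        return False

class RemoveIfExistsTestCase(unittest.TestCase):
    def test_missing_file_is_tolerated(self):
        remover = mock.Mock(side_effect=FileNotFoundError)
        self.assertFalse(remove_if_exists("any path", remover))
        remover.assert_called_once_with("any path")

    def test_existing_file_is_removed(self):
        remover = mock.Mock(return_value=None)
        self.assertTrue(remove_if_exists("any path", remover))
        remover.assert_called_once_with("any path")

if __name__ == "__main__":
    unittest.main()
```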
###Code
!pytest -v project/tests/test_rm_v2.py
###Output
[1m============================= test session starts ==============================[0m
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /root/Github/oo_dp_lesson/env/bin/python
cachedir: .pytest_cache
rootdir: /root/Github/oo_dp_lesson/lessons/Test_and_function_programming_in_Python
[1mcollecting ... [0m[1m
collected 1 item [0m
project/tests/test_rm_v2.py::RmTestCase::test_rm [32mPASSED[0m[32m [100%][0m
[32m============================== [32m[1m1 passed[0m[32m in 0.02s[0m[32m ===============================[0m
###Markdown
More introduction of this topic, please refer to "An Introduction to Mocking in Python". Testing in Multiple Environments ([backe](sect0))* Installing Tox* Configuring Tox for Your Dependencies* Executing ToxSo far, you’ve been testing against a single version of Python using a virtual environment with a specific set of dependencies. You might want to check that your application works on multiple versions of Python, or multiple versions of a package. Tox is an application that automates testing in multiple environments. Installing ToxTox is available on PyPI as a package to install via pip:
###Code
#!pip install tox
###Output
_____no_output_____
###Markdown
Now that you have Tox installed, it needs to be configured.

Configuring Tox for Your Dependencies

Tox is configured via a configuration file in your project directory. The Tox configuration file contains the following:

* The command to run in order to execute tests
* Any additional packages required before executing
* The target Python versions to test against

Instead of having to learn the Tox configuration syntax, you can get a head start by running the quickstart application:

```sh
$ tox-quickstart
```

The Tox configuration tool will ask you those questions and create a file similar to the following in tox.ini:

```
[tox]
envlist = py27, py36
skipsdist=True

[testenv]
deps =
    pytest
    mock
commands =
    pytest
```

Before you can run Tox, it requires that you have a setup.py file in your application folder containing the steps to install your package. If you don’t have one, you can follow this guide on how to create a setup.py before you continue.

Alternatively, if your project is not for distribution on PyPI, you can skip this requirement by adding the following line in the tox.ini file under the `[tox]` heading:

```
[tox]
envlist = py27, py36
skipsdist=True
```

If you don’t create a setup.py, and your application has some dependencies from PyPI, you’ll need to specify those on a number of lines under the `[testenv]` section. For example, Django would require the following:

```
[testenv]
deps = django
```

Once you have completed that stage, you’re ready to run the tests. You can now execute Tox, and it will create two virtual environments: one for Python 2.7 and one for Python 3.6. The Tox directory is called .tox/. Within the .tox/ directory, Tox will execute pytest against each virtual environment. You can run this process by calling Tox at the command line:

```sh
$ tox
...
py3: commands succeeded
congratulations :)
```

Tox will output the results of your tests against each environment. The first time it runs, Tox takes a little bit of time to create the virtual environments, but once it has, the second execution will be a lot faster.

Executing Tox

The output of Tox is quite straightforward. It creates an environment for each version, installs your dependencies, and then runs the test commands. There are some additional command line options that are great to remember.

Run only a single environment, such as Python 3.6:

```sh
$ tox -e py36
```

Recreate the virtual environments, in case your dependencies have changed or site-packages is corrupt:

```sh
$ tox -r
```

Run Tox with less verbose output:

```sh
$ tox -q
```

Running Tox with more verbose output:

```sh
$ tox -v
```

More information on Tox can be found at the Tox Documentation Website.

Automating the Execution of Your Tests ([back](sect0))

So far, you have been executing the tests manually by running a command. There are some tools for executing tests automatically when you make changes and commit them to a source-control repository like Git. Automated testing tools are often known as CI/CD tools, which stands for “Continuous Integration/Continuous Deployment.” They can run your tests, compile and publish any applications, and even deploy them into production.

Travis CI is one of many CI (Continuous Integration) services available. Travis CI works nicely with Python, and now that you’ve created all these tests, you can automate the execution of them in the cloud! Travis CI is free for any open-source projects on GitHub and GitLab and is available for a charge for private projects. To get started, log in to the website and authenticate with your GitHub or GitLab credentials. 
Then create a file called .travis.yml with the following contents:```yamllanguage: pythonpython: - "2.7" - "3.7"install: - pip install -r requirements.txtscript: - python -m unittest discover```This configuration instructs Travis CI to:* Test against Python 2.7 and 3.7 (You can replace those versions with any you choose.)* Install all the packages you list in requirements.txt (You should remove this section if you don’t have any dependencies.)* Run python -m unittest discover to run the testsOnce you have committed and pushed this file, Travis CI will run these commands every time you push to your remote Git repository. You can check out the results on their website. What’s Next* Introducing Linters Into Your Application* Passive Linting With flake8* Aggressive Linting With a Code Formatter* Keeping Your Test Code Clean* Testing for Performance Degradation Between Changes* Testing for Security Flaws in Your ApplicationNow that you’ve learned how to create tests, execute them, include them in your project, and even execute them automatically, there are a few advanced techniques you might find handy as your test library grows. Introducing Linters Into Your ApplicationTox and Travis CI have configuration for a test command. The test command you have been using throughout this tutorial is python -m unittest discover. You can provide one or many commands in all of these tools, and this option is there to enable you to add more tools that improve the quality of your application.One such type of application is called a linter. A linter will look at your code and comment on it. It could give you tips about mistakes you’ve made, correct trailing spaces, and even predict bugs you may have introduced.For more information on linters, read the Python Code Quality tutorial. Passive Linting With flake8A popular linter that comments on the style of your code in relation to the PEP 8 specification is flake8.You can install flake8 using pip:
###Code
#!pip install flake8
###Output
_____no_output_____
###Markdown
You can then run flake8 over a single file, a folder, or a pattern:
###Code
!flake8 test_sum_2.py
###Output
test_sum_2.py:4:1: E302 expected 2 blank lines, found 1
test_sum_2.py:7:1: E305 expected 2 blank lines after class or function definition, found 1
###Markdown
You will see a list of errors and warnings for your code that flake8 has found.flake8 is configurable on the command line or inside a configuration file in your project. If you wanted to ignore certain rules, like E302 shown above, you can set them in the configuration. flake8 will inspect a .flake8 file in the project folder or a setup.cfg file. If you decided to use Tox, you can put the flake8 configuration section inside tox.ini.This example ignores the .git and \_\_pycache__ directories as well as the E305 rule. Also, it sets the max line length to 90 instead of 80 characters. You will likely find that the default constraint of 79 characters for line-width is very limiting for tests, as they contain long method names, string literals with test values, and other pieces of data that can be longer. It is common to set the line length for tests to up to 120 characters:```cfg[flake8]ignore = E305exclude = .git,__pycache__max-line-length = 90```Alternatively, you can provide these options on the command line:```sh$ flake8 --ignore E305 --exclude .git,__pycache__ --max-line-length=90```A full list of configuration options is available on the Documentation Website. You can now add flake8 to your CI configuration. For Travis CI, this would look as follows:```yamlmatrix: include: - python: "2.7" script: "flake8"```Travis will read the configuration in .flake8 and fail the build if any linting errors occur. Be sure to add the flake8 dependency to your requirements.txt file. Aggressive Linting With a Code Formatterflake8 is a passive linter: it recommends changes, but you have to go and change the code. A more aggressive approach is a code formatter. Code formatters will change your code automatically to meet a collection of style and layout practices.black is a very unforgiving formatter. It doesn’t have any configuration options, and it has a very specific style. This makes it great as a drop-in tool to put in your test pipeline. You can install black via pip:
###Code
#!pip install black
###Output
_____no_output_____
###Markdown
Then to run black at the command line, provide the file or directory you want to format:

```sh
$ black test.py
```

Keeping Your Test Code Clean

When writing tests, you may find that you end up copying and pasting code a lot more than you would in regular applications. Tests can be very repetitive at times, but that is by no means a reason to leave your code sloppy and hard to maintain. Over time, you will develop a lot of technical debt in your test code, and if you have significant changes to your application that require changes to your tests, it can be a more cumbersome task than necessary because of the way you structured them.

Try to follow the DRY principle when writing tests: Don’t Repeat Yourself. Test fixtures and functions are a great way to produce test code that is easier to maintain. Also, readability counts. Consider deploying a linting tool like flake8 over your test code:

```sh
$ flake8 --max-line-length=120 tests/
```

Testing for Performance Degradation Between Changes ([back](sect0))

There are many ways to benchmark code in Python. The standard library provides the timeit module, which can time functions a number of times and give you the distribution. This example will execute test() 100 times and print() the output:

```python
def test():
    # ... your code
    pass

if __name__ == '__main__':
    import timeit
    print(timeit.timeit("test()", setup="from __main__ import test", number=100))
```

Another option, if you decided to use pytest as a test runner, is the pytest-benchmark plugin. This provides a pytest fixture called benchmark. You can pass benchmark() any callable, and it will log the timing of the callable to the results of pytest. You can install pytest-benchmark from PyPI using pip:
###Code
#!pip install pytest-benchmark
###Output
_____no_output_____
###Markdown
Then, you can add a test that uses the fixture and passes the callable to be executed:

* tests/test_sum_benchmark.py

```python
from my_sum import sum


def test_sum_benchmark(benchmark):
    hundred_one_list = [1] * 100
    result = benchmark(sum, hundred_one_list)
    assert result == 100
```

When you execute pytest, the benchmark results are reported in a timing table alongside the regular test results. More information is available at the Documentation Website.

Testing for Security Flaws in Your Application

Another test you will want to run on your application is checking for common security mistakes or vulnerabilities. You can install bandit from PyPI using pip:
###Code
#!pip install bandit
###Output
_____no_output_____
###Markdown
You can then pass the name of your application module with the -r flag, and it will give you a summary:
###Code
!bandit -r my_sum
###Output
[main] INFO profile include tests: None
[main] INFO profile exclude tests: None
[main] INFO cli include tests: None
[main] INFO cli exclude tests: None
[main] INFO running on Python 3.8.10
[95mRun started:2021-07-11 12:48:42.930518[0m
[95m
Test results:[0m
No issues identified.
[95m
Code scanned:[0m
Total lines of code: 0
Total lines skipped (#nosec): 0
[95m
Run metrics:[0m
Total issues (by severity):
Undefined: 0
Low: 0
Medium: 0
High: 0
Total issues (by confidence):
Undefined: 0
Low: 0
Medium: 0
High: 0
[95mFiles skipped (1):[0m
my_sum (No such file or directory)
|
images/better-plots/MultiSeries.ipynb | ###Markdown
When analyzing data, I usually use the following three modules. I use pandas for data management, filtering, grouping, and processing. I use numpy for basic array math. I use toyplot for rendering the charts.
###Code
import pandas
import numpy
import toyplot
import toyplot.pdf
import toyplot.png
import toyplot.svg
print('Pandas version: ', pandas.__version__)
print('Numpy version: ', numpy.__version__)
print('Toyplot version: ', toyplot.__version__)
###Output
Pandas version: 0.19.2
Numpy version: 1.12.0
Toyplot version: 0.14.0-dev
###Markdown
Load in the "auto" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https://archive.ics.uci.edu/ml/datasets/Auto+MPG.The data are stored in a text file containing columns of data. We use the pandas.read_table() method to parse the data and load it in a pandas DataFrame. The file does not contain a header row, so we need to specify the names of the columns manually.
###Code
column_names = ['MPG',
'Cylinders',
'Displacement',
'Horsepower',
'Weight',
'Acceleration',
'Model Year',
'Origin',
'Car Name']
data = pandas.read_table('auto-mpg.data',
delim_whitespace=True,
names=column_names,
index_col=False)
###Output
_____no_output_____
###Markdown
The origin column indicates the country of origin for the car's manufacturer. It has three numeric values, 1, 2, or 3. These indicate USA, Europe, or Japan, respectively. Replace the origin column with a string representing the country name.
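As an aside, the same replacement can be written with Series.map (a minimal equivalent sketch; the next cell does it with a lookup Series instead, which it also reuses later when iterating over the countries):

```python
data['Origin'] = data['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
```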
###Code
country_map = pandas.Series(index=[1,2,3],
data=['USA', 'Europe', 'Japan'])
data['Origin'] = numpy.array(country_map[data['Origin']])
###Output
_____no_output_____
###Markdown
In this plot we are going to show the trend of the average miles per gallon (MPG) rating for subsequent model years separated by country of origin. This time period saw a significant increase in MPG driven by the U.S. fuel crisis. We can use the pivot_table feature of pandas to get this information from the data. (Excel and other spreadsheets have similar functionality.)
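As an aside, the same table can also be produced with a groupby (a minimal equivalent sketch of the pivot_table call in the next cell):

```python
average_mpg_per_year = (data.groupby(['Model Year', 'Origin'])['MPG']
                            .mean()
                            .unstack('Origin'))
```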
###Code
average_mpg_per_year = data.pivot_table(index='Model Year',
columns='Origin',
values='MPG',
aggfunc='mean')
average_mpg_per_year
average_mpg_per_year.columns
###Output
_____no_output_____
###Markdown
Now use toyplot to plot this trend on a standard x-y chart.
###Code
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# Toyplot is sometimes inaccurate in judging the width of labels.
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries.pdf')
toyplot.svg.render(canvas, 'MultiSeries.svg')
toyplot.png.render(canvas, 'MultiSeries.png', scale=5)
###Output
_____no_output_____
###Markdown
For the talk, I want to compare this to using a 3D plot. Toyplot does not yet have such silly plot capabilities, so write out the results of this pivot table to csv so we can easily load it into Excel.
###Code
average_mpg_per_year.to_csv('auto-mpg-origin-year.csv')
###Output
_____no_output_____
###Markdown
In one of my counterexamples, I remind the audience to make colors consistent. Make a plot with inconsistent colors.
###Code
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
for column in ['Europe', 'Japan', 'USA']:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# Toyplot is sometimes inaccurate in judging the width of labels.
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries_Inconsistent_Colors.pdf')
toyplot.svg.render(canvas, 'MultiSeries_Inconsistent_colors.svg')
toyplot.png.render(canvas, 'MultiSeries_Inconsistent_colors.png', scale=5)
###Output
_____no_output_____
###Markdown
I make a point that it is a bad idea to clutter up the canvas with non-data items like grid lines. Create a counterexample that has lots of distracting lines.
###Code
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
# Create some grid lines. (Not a great idea.)
axes.hlines(xrange(0,41,5), color='black')
axes.vlines(xrange(1970,1983), color='black')
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# Toyplot is sometimes inaccurate in judging the width of labels.
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries_Grid_Dark.pdf')
toyplot.svg.render(canvas, 'MultiSeries_Grid_Dark.svg')
toyplot.png.render(canvas, 'MultiSeries_Grid_Dark.png', scale=5)
###Output
_____no_output_____
###Markdown
If you really want gridlines, you should make them very subtle so they don't interfere with the actual data.
###Code
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
# Create some grid lines. (Not a great idea.)
axes.hlines(xrange(0,41,5), color='lightgray')
axes.vlines(xrange(1970,1983), color='lightgray')
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries_Grid_Light.pdf')
toyplot.svg.render(canvas, 'MultiSeries_Grid_Light.svg')
toyplot.png.render(canvas, 'MultiSeries_Grid_Light.png', scale=5)
###Output
_____no_output_____
###Markdown
Frankly, vertical gridlines are usually not all that necessary. If you remove them, there is less clutter. Not going overboard on horizontal lines is also good.
###Code
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-1,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
# Create some grid lines. (Not a great idea.)
axes.hlines(xrange(0,41,10), color='lightgray')
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
axes.plot(x, y)
axes.text(x[-1], y[-1], column,
style={"text-anchor":"start",
"-toyplot-anchor-shift":"2px"})
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
axes.x.domain.max = 1984
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
toyplot.pdf.render(canvas, 'MultiSeries_Grid_Light_Fewer.pdf')
toyplot.svg.render(canvas, 'MultiSeries_Grid_Light_Fewer.svg')
toyplot.png.render(canvas, 'MultiSeries_Grid_Light_Fewer.png', scale=5)
###Output
_____no_output_____
###Markdown
I personally find grid lines a bit overrated. Don't fear not having grid lines at all, as in the first example. Another pet peeve of mine is legends. I hate them. They are stupid and only exist because those who make plots are too lazy to place labels well (and because that is hard). But if you use a legend, at least make sure the order in the legend is not inconsistent with the order of the data.
###Code
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-11,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
marks = {}
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
marks[column] = axes.plot(x, y)
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
canvas.legend([('USA', marks['USA']),
('Europe', marks['Europe']),
('Japan', marks['Japan'])],
rect=('-1in', '-1.25in', '1in', '0.75in'))
toyplot.pdf.render(canvas, 'Legend_Backward.pdf')
toyplot.svg.render(canvas, 'Legend_Backward.svg')
toyplot.png.render(canvas, 'Legend_Backward.png', scale=5)
###Output
_____no_output_____
###Markdown
Do it again, but at least order the legend correctly.
###Code
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-11,6,-43),
xlabel = 'Model Year',
ylabel = 'Average MPG')
marks = {}
for column in country_map:
series = average_mpg_per_year[column]
x = series.index + 1900
y = numpy.array(series)
marks[column] = axes.plot(x, y)
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# The labels can make for odd tick placement.
# Place them manually
axes.x.ticks.locator = \
toyplot.locator.Explicit([1970,1974,1978,1982])
canvas.legend([('Europe', marks['Europe']),
('Japan', marks['Japan']),
('USA', marks['USA'])],
rect=('-1in', '-1.25in', '1in', '0.75in'))
toyplot.pdf.render(canvas, 'Legend_OK.pdf')
toyplot.svg.render(canvas, 'Legend_OK.svg')
toyplot.png.render(canvas, 'Legend_OK.png', scale=5)
###Output
_____no_output_____ |
crawling/brunch_crawling.ipynb | ###Markdown
필요한 library 설치
###Code
!pip install selenium
!pip install beautifulsoup4
!pip install pickle-mixin
!pip install requests
###Output
_____no_output_____
###Markdown
Start crawling
###Code
# import selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import pandas as pd
import time
import requests
import re
# options to look like a human
options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox") # needed when running as administrator
# the site may block access if it thinks we are a bot, so set these options
options.add_argument("window-size=1920x1080")
options.add_argument("lang=ko_KR")
options.add_argument("user-agent=Chrome/89.0.4389.114")
# to save error log
service_args = ['--verbose']
service_log_path = "./chromedriver.log" # write a log when an error occurs
# open a Chrome window
driver = webdriver.Chrome(executable_path ="./chromedriver",
options = options,
service_args = service_args,
service_log_path = service_log_path)
###Output
_____no_output_____
###Markdown
Search by keyword + The search is split into keywords curated by Brunch and keywords specified directly by the user. + Brunch loads posts as you scroll, so we add code that scrolls down automatically. + Rather than fetching the posts that match a keyword right away, we save the URLs of the posts.
###Code
def brunch_url_keyword(keyword, user_selected = True):
if user_selected:
url = "https://brunch.co.kr/search?q="+ keyword
else:
url = "https://brunch.co.kr/keyword/" + keyword + "?q=g"
driver.get(url)
    # current scroll height
last_height = driver.execute_script("return document.body.scrollHeight")
for i in range(1000):
        # scroll down
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        # wait for the page to load
time.sleep(3)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight-50);")
time.sleep(3)
        # compare the new scroll height with the previous one
new_height = driver.execute_script("return document.body.scrollHeight")
if new_height == last_height:
break
last_height = new_height
source = driver.page_source
data = source.encode('utf-8')
bs = BeautifulSoup(data, 'html.parser')
driver.quit()
urls = bs.select('#wrapArticle > div.wrap_article_list.\#keyword_related_contents > ul > li')
print(len(urls))
    # save to a file
filename = keyword + "_url.txt"
f = open(filename, 'w')
for val in urls:
        data = str(val) + "\n"  # val is a bs4 Tag; store its HTML
f.write(data)
f.close()
return urls
# search with keywords curated by Brunch
brunch_url_keyword("감성_에세이",False)
brunch_url_keyword("문화·예술",False)
brunch_url_keyword("취향저격_영화_리뷰",False)
brunch_url_keyword("사랑·이별",False)
# search with keywords entered directly by the user
brunch_url_keyword("기쁨")
brunch_url_keyword("슬픔")
brunch_url_keyword("분노")
brunch_url_keyword("공포")
brunch_url_keyword("사랑")
def read_url(keyword):
file_name = './브런치데이터/'+ keyword + "_url.txt"
b = []
f = open(file_name, 'r')
a = f.readlines()
for l in a:
before = l.replace('\n', '')
b.append(before)
return b
def remove_emoji(string):
emoji_pattern = re.compile("["
u"\U0001F600-\U0001F64F" # emoticons
u"\U0001F300-\U0001F5FF" # symbols & pictographs
u"\U0001F680-\U0001F6FF" # transport & map symbols
u"\U0001F1E0-\U0001F1FF" # flags (iOS)
u"\U00002500-\U00002BEF" # chinese char
u"\U00002702-\U000027B0"
u"\U00002702-\U000027B0"
# u"\U000024C2-\U0001F251"
u"\U0001f926-\U0001f937"
u"\U00010000-\U0010ffff"
u"\u2640-\u2642"
u"\u2600-\u2B55"
u"\u200d"
u"\u23cf"
u"\u23e9"
u"\u231a"
u"\ufe0f" # dingbats
u"\u3030"
u"\xa0"
u"\ucdee"
u'\ude0a'
"]+", flags=re.UNICODE)
return emoji_pattern.sub(r'', str(string))
def get_rawText_req(url_list):
doc_df = pd.DataFrame(columns = ['text'])
for url in url_list:
        # access each post via its url
req = requests.get(url)
html = req.text
time.sleep(0.03)
data = html.encode('utf-8')
bs = BeautifulSoup(data, 'html.parser')
        # extract the post text
doc = bs.select('body > div.service_contents.article_contents > div.wrap_view_article > div.wrap_body')
raw_doc = ""
if not doc:
continue
elif doc[0].select('h4') != []:
for d in doc[0].select('h4'):
par = d.get_text().replace(u'xa0', ' ').replace(' ',' ').replace(u'\udd25', ' ').replace(u'\ucdee', ' ')
par = remove_emoji(par)
par = re.compile('[^가-힣0-9ㄱ-ㅎㅏ-ㅣ\.\?\!,^]+').sub(' ', par)
raw_doc = raw_doc + str(par)
elif doc[0].select('p') != []:
for d in doc[0].select('p'):
par = d.get_text().replace(u'xa0', ' ').replace(' ',' ').replace(u'\udd25', ' ').replace(u'\ucdee', ' ')
par = remove_emoji(par)
par = re.compile('[^가-힣0-9ㄱ-ㅎㅏ-ㅣ\.\?\!,^]+').sub(' ', par)
raw_doc = raw_doc + str(par)
        # append to the dataframe
print(raw_doc + "\n")
doc_df = doc_df.append({'text' : raw_doc}, ignore_index = True)
time.sleep(0.05)
print(doc_df)
return doc_df.drop_duplicates()
get_rawText_req(read_url('url_scary_keyword.txt')).to_excel('scary.xlsx')
get_rawText_req(read_url('url_love_and_farewell.txt')).to_excel('love_farewell.xlsx')
get_rawText_req(read_url('url_movie_review.txt')).to_excel('movie_review.xlsx')
get_rawText_req(read_url('url_senti_essay.txt')).to_excel('senti_essay.xlsx')
get_rawText_req(read_url('url_happy_keyword.txt')).to_excel('happy.xlsx')
get_rawText_req(read_url('url_angry_keyword.txt')).to_excel('angry.xlsx')
get_rawText_req(read_url('url_sad_keyword.txt')).to_excel('sad.xlsx')
###Output
_____no_output_____ |
tutorial/322_protein.ipynb | ###Markdown
Protein Foldings

Now we try a simple example from the paper "Finding low-energy conformations of lattice protein models by quantum annealing" by Alejandro Perdomo-Ortiz, Neil Dickson, Marshall Drew-Brook, Geordie Rose & Alán Aspuru-Guzik, Scientific Reports volume 2, Article number: 571 (2012). https://www.nature.com/articles/srep00571

Overview

This example solves a simple HP model and a Miyazawa-Jernigan (MJ) model of a protein.

MJ model to QUBO

Each turn is expressed using 2 binary values. (Source: https://www.nature.com/articles/srep00571)

This time we use the amino acid sequence PSVKMA. There are interactions when certain amino acids sit next to specific other acids. Using these rules we solve the QUBO as a function to be minimized. The whole calculation is complicated, so we solve only a part of it, separated into schemes. (Source: https://www.nature.com/articles/srep00571)

Finally we try to find the most stable state by the QUBO calculation. (Source: https://www.nature.com/articles/srep00571)

Model and scheme

Now we start from the scheme where we already have PSVK and just need to find the locations of M and A. The list of rotations starts from 010, we already have PSVK, and M takes only two possibilities, so we now have the cost function. Now we try to find the values of 3 qubits; we have the cost function from the paper.

Boolean reduction of 3-body interaction to 2-body interaction

We have to reduce the order of the equation using a mathematical technique. Using q4 we obtain a 2-body expression, and adding a penalty term we obtain the final cost function. Using blueqat we try to solve this equation.

Solving using blueqat

Now we set delta = 10 and just put in the cost function.
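As a reminder of the reduction being used (reconstructed here from the penalty terms that appear in the QUBO matrix below, not copied verbatim from the paper): a product of two bits $q_0 q_1$ can be replaced by a single ancilla bit $q_3$ by adding the penalty

$$E_{\text{penalty}} = \delta\,\left(q_0 q_1 - 2 q_0 q_3 - 2 q_1 q_3 + 3 q_3\right),$$

which is zero exactly when $q_3 = q_0 q_1$ and at least $\delta$ otherwise. Any 3-body term $q_0 q_1 q_2$ can then be written as the 2-body term $q_3 q_2$. With $\delta = 10$ this penalty supplies the $d$, $-2d$ and $3d$ entries of the QUBO matrix in the next cell.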
###Code
!pip install -U blueqat
import blueqat.wq as wq
a = wq.Opt()
d = 10;
a.qubo = [[0,d,9,-2*d],[0,0,9,-2*d],[0,0,-4,-16],[0,0,0,3*d]]
a.sa()
###Output
1.5831871032714844
###Markdown
Protein Foldings

Now we try a simple example from the paper "Finding low-energy conformations of lattice protein models by quantum annealing" by Alejandro Perdomo-Ortiz, Neil Dickson, Marshall Drew-Brook, Geordie Rose & Alán Aspuru-Guzik, Scientific Reports volume 2, Article number: 571 (2012). https://www.nature.com/articles/srep00571

Overview

This example solves a simple HP model and a Miyazawa-Jernigan (MJ) model of a protein.

MJ model to QUBO

Each turn is expressed using 2 binary values. (Source: https://www.nature.com/articles/srep00571)

This time we use the amino acid sequence PSVKMA. There are interactions when certain amino acids sit next to specific other acids. Using these rules we solve the QUBO as a function to be minimized. The whole calculation is complicated, so we solve only a part of it, separated into schemes. (Source: https://www.nature.com/articles/srep00571)

Finally we try to find the most stable state by the QUBO calculation. (Source: https://www.nature.com/articles/srep00571)

Model and scheme

Now we start from the scheme where we already have PSVK and just need to find the locations of M and A. The list of rotations starts from 010, we already have PSVK, and M takes only two possibilities, so we now have the cost function. Now we try to find the values of 3 qubits; we have the cost function from the paper.

Boolean reduction of 3-body interaction to 2-body interaction

We have to reduce the order of the equation using a mathematical technique. Using q4 we obtain a 2-body expression, and adding a penalty term we obtain the final cost function. Using blueqat we try to solve this equation.

Solving using blueqat

Now we set delta = 10 and just put in the cost function.
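As a quick sanity check (a small sketch that does not need blueqat), we can enumerate all 16 assignments of the four bits and confirm that (0, 0, 1, 0) minimizes the same cost function that the next cell encodes as a Hamiltonian:

```python
from itertools import product

def energy(q0, q1, q2, q3, d=10):
    # same expression as the Hamiltonian h built in the next cell
    return (-1 - 4*q2 + 9*q0*q2 + 9*q1*q2 - 16*q2*q3
            + d*(3*q3 + q0*q1 - 2*q0*q3 - 2*q1*q3))

best = min(product([0, 1], repeat=4), key=lambda q: energy(*q))
print(best, energy(*best))  # expected: (0, 0, 1, 0) -5
```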
###Code
import numpy as np
from blueqat import vqe
from blueqat.pauli import qubo_bit as q
h = -1 -4*q(2)+9*q(0)*q(2)+9*q(1)*q(2)-16*q(2)*q(3)+10*(3*q(3)+q(0)*q(1)-2*q(0)*q(3)-2*q(1)*q(3))
step = 100
result = vqe.Vqe(vqe.QaoaAnsatz(h, step)).run()
print(result.most_common(12))
###Output
(((0, 0, 1, 0), 0.9998531901935136), ((0, 1, 0, 0), 4.638378682544372e-05), ((1, 0, 0, 0), 4.638378682543682e-05), ((0, 1, 1, 1), 1.5145464068886668e-05), ((1, 0, 1, 1), 1.514546406888327e-05), ((1, 1, 0, 1), 1.0850435861243557e-05), ((0, 0, 0, 0), 6.103191660791661e-06), ((1, 1, 0, 0), 2.0219941454079172e-06), ((0, 0, 0, 1), 1.6139506990286885e-06), ((0, 0, 1, 1), 1.4122868189391485e-06), ((1, 1, 1, 1), 8.045082057090799e-07), ((0, 1, 1, 0), 3.977024947978311e-07))
|
Udemy/Refactored_Py_DS_ML_Bootcamp-master/06-Data-Visualization-with-Seaborn/01-Distribution Plots.ipynb | ###Markdown
___ ___ Distribution PlotsLet's discuss some plots that allow us to visualize the distribution of a data set. These plots are:* distplot* jointplot* pairplot* rugplot* kdeplot ___ Imports
###Code
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataSeaborn comes with built-in data sets!
###Code
tips = sns.load_dataset('tips')
tips.head()
###Output
_____no_output_____
###Markdown
distplotThe distplot shows the distribution of a univariate set of observations.
###Code
sns.distplot(tips['total_bill'])
# Safe to ignore warnings
###Output
/Users/marci/anaconda/lib/python3.5/site-packages/statsmodels/nonparametric/kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
###Markdown
To remove the kde layer and just have the histogram use:
###Code
sns.distplot(tips['total_bill'],kde=False,bins=30)
###Output
_____no_output_____
###Markdown
jointplotjointplot() allows you to basically match up two distplots for bivariate data. With your choice of what **kind** parameter to compare with: * “scatter” * “reg” * “resid” * “kde” * “hex”
###Code
sns.jointplot(x='total_bill',y='tip',data=tips,kind='scatter')
sns.jointplot(x='total_bill',y='tip',data=tips,kind='hex')
sns.jointplot(x='total_bill',y='tip',data=tips,kind='reg')
###Output
/Users/marci/anaconda/lib/python3.5/site-packages/statsmodels/nonparametric/kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
###Markdown
pairplotpairplot will plot pairwise relationships across an entire dataframe (for the numerical columns) and supports a color hue argument (for categorical columns).
###Code
sns.pairplot(tips)
sns.pairplot(tips,hue='sex',palette='coolwarm')
###Output
_____no_output_____
###Markdown
rugplotrugplots are actually a very simple concept, they just draw a dash mark for every point on a univariate distribution. They are the building block of a KDE plot:
###Code
sns.rugplot(tips['total_bill'])
###Output
_____no_output_____
###Markdown
kdeplotkdeplots are [Kernel Density Estimation plots](http://en.wikipedia.org/wiki/Kernel_density_estimation#Practical_estimation_of_the_bandwidth). These KDE plots replace every single observation with a Gaussian (Normal) distribution centered around that value. For example:
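For reference, this is the estimator that the demonstration code below constructs (a reconstruction in formula form; the bandwidth is the same rule-of-thumb value computed in the code):

$$\hat{f}(x) = \frac{1}{n}\sum_{i=1}^{n} \mathcal{N}\!\left(x;\, x_i,\, h\right), \qquad h = \left(\frac{4\,\hat{\sigma}^{5}}{3n}\right)^{1/5}$$

The code scales each kernel for display rather than averaging them, but the idea is the same.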
###Code
# Don't worry about understanding this code!
# It's just for the diagram below
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#Create dataset
dataset = np.random.randn(25)
# Create another rugplot
sns.rugplot(dataset);
# Set up the x-axis for the plot
x_min = dataset.min() - 2
x_max = dataset.max() + 2
# 100 equally spaced points from x_min to x_max
x_axis = np.linspace(x_min,x_max,100)
# Set up the bandwidth, for info on this:
url = 'http://en.wikipedia.org/wiki/Kernel_density_estimation#Practical_estimation_of_the_bandwidth'
bandwidth = ((4*dataset.std()**5)/(3*len(dataset)))**.2
# Create an empty kernel list
kernel_list = []
# Plot each basis function
for data_point in dataset:
# Create a kernel for each point and append to list
kernel = stats.norm(data_point,bandwidth).pdf(x_axis)
kernel_list.append(kernel)
#Scale for plotting
kernel = kernel / kernel.max()
kernel = kernel * .4
plt.plot(x_axis,kernel,color = 'grey',alpha=0.5)
plt.ylim(0,1)
# To get the kde plot we can sum these basis functions.
# Plot the sum of the basis function
sum_of_kde = np.sum(kernel_list,axis=0)
# Plot figure
fig = plt.plot(x_axis,sum_of_kde,color='indianred')
# Add the initial rugplot
sns.rugplot(dataset,c = 'indianred')
# Get rid of y-tick marks
plt.yticks([])
# Set title
plt.suptitle("Sum of the Basis Functions")
###Output
_____no_output_____
###Markdown
So with our tips dataset:
###Code
sns.kdeplot(tips['total_bill'])
sns.rugplot(tips['total_bill'])
sns.kdeplot(tips['tip'])
sns.rugplot(tips['tip'])
###Output
/Users/marci/anaconda/lib/python3.5/site-packages/statsmodels/nonparametric/kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
|
Lestrygonians Part 4.ipynb | ###Markdown
Analyzing Ulysses with NLTK: Lestrygonians (Ch. 8) Part IV: Wordplay Table of Contents* [Introduction](intro)* [Tokenizing Without Punctuation](tokenizing_wo_punctuation)* [Method 1: TokenSearcher Object](tokensearcher)* [Method 2: Bigram Splitting Method](bigram_splitting)* [Functionalizing Bigram Search Methods](functionalizing) IntroductionIn this notebook we'll analyze some of Joyce's wordplay in Ulysses, using more complicated regular expressions. Tokenizing Without PunctuationTo tokenize the chapter and throw out the punctuation, we can use the regular expression `\w+`. Note that this will split up contractions like "can't" into `["can","t"]`.
###Code
%matplotlib inline
import nltk, re, io
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib.pylab import *
txtfile = 'txt/08lestrygonians.txt'
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
with io.open(txtfile) as f:
tokens = tokenizer.tokenize(f.read())
print tokens[1000:1020]
print tokenizer.tokenize("can't keep a contraction together!")
###Output
['can', 't', 'keep', 'a', 'contraction', 'together']
###Markdown
Method 1: TokenSearcher ObjectThe first method for searching for regular expressions in a set of tokens is the TokenSearcher object. This can be fed a regular expression that searches across tokens, and it will search through each token. This provides a big advantage: we don't have to manually break all of our tokens into n-grams ourselves, we can just let the TokenSearcher do the hard work. Here's an example of how to create and call that object:
###Code
tsearch = nltk.TokenSearcher(tokens)
s_s_ = tsearch.findall(r'<s.*> <.*> <s.*> <.*> <.*>')
print len(s_s_)
for s in s_s_:
print ' '.join(s)
###Output
78
scotch A sugarsticky girl shovelling
selling off some old furniture
saw flapping strongly wheeling between
seabirds gulls seagoose Swans from
sound She s not exactly
sandwichmen marched slowly towards him
street after street Just keep
smart girls sitting inside writing
suited her small head Sister
she If she had married
some sticky stuff Flies picnic
she looked soaped all over
saint Kevin s parade Pen
s womaneyes said melancholily Now
said He s a caution
said He s always bad
speak Look straight in her
serge dress she had two
sugary flour stuck to her
simply Child s head too
something to stop that Life
she had so many children
said The spoon of pap
still All skedaddled Why he
squad Turnkey s daughter got
sun slowly shadowing Trinity s
say Other steps into his
spewed Provost s house The
s uniform since he got
say it s healthier Windandwatery
sweating Irish stew into their
s daughter s bag and
some king s mistress His
s bottle shoulders On his
street west something changed Could
s corner still pursued Jingling
shovelled gurgling soup down his
stewgravy with sopping sippets of
server gathered sticky clattering plates
second helping stared towards the
split their skulls open Moo
sheepsnouts bloodypapered snivelling nosejam on
smokinghot thick sugary Famished ghosts
something the somethings of the
some fellow s digestion Religions
sandwich Yes sir Like a
s the style Who s
sandwich into slender strips Mr
see Part shares and part
strongly to speed it set
said He s the organiser
snuffled and scratched Flea having
such and such replete Too
strips of sandwich fresh clean
s no straight sport going
soaked and softened rolled pith
sturgeon high sheriff Coffey the
soup Geese stuffed silly for
s the same fish perhaps
sky No sound The sky
see Never speaking I mean
something fall see if she
said They stick to you
said He s a safe
say He s not too
sake What s yours Tom
said Certainly sir Paddy Leonard
said with scorn Mr Byrne
said A suckingbottle for the
sweet then savoury Mr Bloom
s confectioner s window of
said Molesworth street is opposite
street different smell Each person
spring the summer smells Tastes
shameless not seeing That girl
school I sentenced him to
sunlight Tan shoes Turnedup trousers
stuck Ah soap there I
###Markdown
Method 2: Bigram Splitting MethodAnother way of searching for patterns, one that may be needed if we want to use criteria that would be hard to implement with a regular expression (such as finding two words that are the same length next to each other), is to assemble all of the tokens into bigrams.Suppose we are looking for two words that start with the same letter. We can do this by iterating through a set of bigrams (we'll use a built-in NLTK object to generate bigrams), and apply our search criteria to the first and second words independently. To create bigrams, we'll use the `nltk.bigrams()` method, feeding it a list of tokens.When we do this, we can see there's a lot of alliteration in this chapter.
###Code
def printlist(the_list):
for item in the_list:
print item
alliteration = []
for (i,j) in nltk.bigrams(tokens):
if i[:1]==j[:1]:
alliteration.append( ' '.join([i,j]) )
print "Found",len(alliteration),"pairs of words starting with the same letter:"
printlist(alliteration[:10])
printlist(alliteration[-10:])
lolly = []
for (i,j) in nltk.bigrams(tokens):
if len( re.findall('ll',i) )>0:
if len( re.findall('l',j) )>0:
lolly.append( ' '.join([i,j]) )
elif len( re.findall('ll',j) )>0:
if len( re.findall('l',i) )>0:
lolly.append(' '.join([i,j]) )
print "Found",len(lolly),"pairs of words, one containing 'll' and the other containing 'l':"
print "First 25:"
printlist(lolly[:25])
lolly = []
for (i,j) in nltk.bigrams(tokens):
if len( re.findall('rr',i) )>0:
if len( re.findall('r',j) )>0:
lolly.append( ' '.join([i,j]) )
elif len( re.findall('rr',j) )>0:
if len( re.findall('r',i) )>0:
lolly.append(' '.join([i,j]) )
print "Found",len(lolly),"pairs of words, one containing 'r' and the other containing 'r':"
printlist(lolly)
###Output
Found 22 pairs of words, one containing 'r' and the other containing 'r':
daguerreotype atelier
supperroom or
from Harrison
terrible for
Farrell Mr
Weightcarrying huntress
or fivebarred
Dr Murren
marching irregularly
irregularly rounded
suburbs jerrybuilt
jerrybuilt Kerwan
garden Terrific
Portobello barracks
artificial irrigation
irrigation Bleibtreustrasse
dropping currants
currants Screened
whispered Prrwht
ravenous terrier
Earlsfort terrace
Where Hurry
###Markdown
Functionalizing Bigram SearchesWe can functionalize the search for patterns with a single and double character shared, i.e., `dropping currants` (the letter r).
###Code
def double_letter_alliteration(c,tokens):
"""
This function finds all occurrences of double-letter and single-letter
occurrences of the character c.
This function is called by all_double_letter_alliteration().
"""
allall = []
for (i,j) in nltk.bigrams(tokens):
if len( re.findall(c+c,i) )>0:
if len( re.findall(c,j) )>0:
                allall.append( ' '.join([i,j]) )
elif len( re.findall(c+c,j) )>0:
if len( re.findall(c,i) )>0:
allall.append(' '.join([i,j]) )
return allall
###Output
_____no_output_____
###Markdown
Now we can use this function to search for the single-double letter pattern individually, or we can define a function that will loop over all 26 letters to find all matching patterns.
###Code
printlist(double_letter_alliteration('r',tokens))
printlist(double_letter_alliteration('o',tokens))
import string
def all_double_letter_alliteration(tokens):
all_all = []
alphabet = list(string.ascii_lowercase)
for aleph in alphabet:
results = double_letter_alliteration(aleph,tokens)
print "Matching",aleph,":",len(results)
all_all += results
return all_all
allall = all_double_letter_alliteration(tokens)
print len(allall)
###Output
Matching a : 1
Matching b : 3
Matching c : 4
Matching d : 8
Matching e : 109
Matching f : 1
Matching g : 5
Matching h : 1
Matching i : 0
Matching j : 0
Matching k : 0
Matching l : 47
Matching m : 1
Matching n : 16
Matching o : 59
Matching p : 1
Matching q : 0
Matching r : 13
Matching s : 31
Matching t : 38
Matching u : 0
Matching v : 0
Matching w : 0
Matching x : 0
Matching y : 0
Matching z : 0
338
###Markdown
That's a mouthful of alliteration! We can compare the number of words that matched this (one, single) search for examples of alliteration to the total number of words in the chapter:
###Code
double(len(allall))/len(tokens)
###Output
_____no_output_____
###Markdown
Holy cow - 2.6% of the chapter is just this one alliteration pattern, of having two neighbor words: one with a double letter, and one with a single letter.
###Code
print len(allall)
printlist(allall[:20])
###Output
338
bawling maaaaaa
ball bobbed
bob Bubble
buckets wobbly
collecting accounts
Scotch accent
Scotch accent
crown Accept
had plodded
dumdum Diddlediddle
and bidding
remembered Hidden
naked goddesses
said Paddy
standing Paddy
Rochford nodded
bluey greeny
goes Fifteen
they feel
They wheeled
###Markdown
Let's look at the pattern taken one step further: we'll look for double letters in neighbor words.
###Code
def match_double(aleph,tokens):
matches = []
for (i,j) in nltk.bigrams(tokens):
if len( re.findall(aleph+aleph,i) )>0:
if len( re.findall(aleph+aleph,j) )>0:
matches.append(' '.join([i,j]))
return matches
def double_double(tokens):
dd = []
alphabet = list(string.ascii_lowercase)
for aleph in alphabet:
results = match_double(aleph, tokens)
print "Matching %s%s: %d"%(aleph,aleph,len(results))
dd += results
return dd
print "Neighbor words with double letters:"
dd = double_double(tokens)
printlist(dd)
###Output
Neighbor words with double letters:
Matching aa: 0
Matching bb: 0
Matching cc: 0
Matching dd: 0
Matching ee: 5
Matching ff: 2
Matching gg: 0
Matching hh: 0
Matching ii: 0
Matching jj: 0
Matching kk: 0
Matching ll: 15
Matching mm: 0
Matching nn: 2
Matching oo: 4
Matching pp: 3
Matching qq: 0
Matching rr: 0
Matching ss: 1
Matching tt: 1
Matching uu: 0
Matching vv: 0
Matching ww: 0
Matching xx: 0
Matching yy: 0
Matching zz: 0
wheeling between
Fleet street
Three cheers
greens See
green cheese
scruff off
sheriff Coffey
quaywalls gulls
parallel parallax
wallpaper Dockrell
Tisdall Farrell
belly swollen
still All
Silly billies
ll tell
swollen belly
Wellmannered fellow
ball falls
full All
Kill Kill
numbskull Will
William Miller
Penny dinner
canny Cunning
looks too
Goosestep Foodheated
loonies mooching
Moo Poor
Happy Happier
Happy Happy
sopping sippets
pressed grass
platt butter
###Markdown
AcronymsLet's take a look at some acronyms. For this application, it might be better to tokenize by sentence, and extract acronyms for sentences.
###Code
with io.open(txtfile) as f:
sentences = nltk.sent_tokenize(f.read())
print len(sentences)
acronyms = []
for s in sentences:
s2 = re.sub('\n',' ',s)
words = s2.split(" ")
acronym = ''.join(w[0] for w in words if w<>u'')
acronyms.append(acronym)
print len(acronyms)
print "-"*20
printlist(acronyms[:10])
print "-"*20
printlist(sentences[:10]) # <-- contains newlines, but removed to create acronyms
from nltk.corpus import words
acronyms[101:111]
###Output
_____no_output_____ |
site/en/r1/guide/keras.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Run in Google Colab View source on GitHub Keras is a high-level API to build and train deep learning models. It's used forfast prototyping, advanced research, and production, with three key advantages:- *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors.- *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions.- *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras`tf.keras` is TensorFlow's implementation of the[Keras API specification](https://keras.io). This is a high-levelAPI to build and train models that includes first-class support forTensorFlow-specific functionality, such as [eager execution](eager_execution),`tf.data` pipelines, and [Estimators](./estimators.md).`tf.keras` makes TensorFlow easier to use without sacrificing flexibility andperformance.To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
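###Markdown
The functional API also covers the multi-input case listed above. Here is a minimal sketch, not part of the original guide: the input names, shapes, and layer sizes are illustrative assumptions. Two inputs are encoded separately, concatenated, and mapped to one set of predictions.
###Code
# A minimal multi-input sketch using the functional API.
# Input names and shapes below are illustrative assumptions.
image_features = tf.keras.Input(shape=(32,), name='image_features')
text_features = tf.keras.Input(shape=(16,), name='text_features')
# Encode each input separately.
x1 = layers.Dense(32, activation='relu')(image_features)
x2 = layers.Dense(32, activation='relu')(text_features)
# Merge the two branches and predict 10 classes.
combined = layers.concatenate([x1, x2])
multi_input_predictions = layers.Dense(10, activation='softmax')(combined)
multi_input_model = tf.keras.Model(inputs=[image_features, text_features],
                                   outputs=multi_input_predictions)
multi_input_model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                          loss='categorical_crossentropy',
                          metrics=['accuracy'])
###Output
_____no_output_____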
###Markdown
Model subclassing Build a fully-customizable model by subclassing `tf.keras.Model` and defining your own forward pass. Create layers in the `__init__` method and set them as attributes of the class instance. Define the forward pass in the `call` method. Model subclassing is particularly useful when [eager execution](./eager.ipynb) is enabled since the forward pass can be written imperatively. Key Point: Use the right API for the job. While model subclassing offers flexibility, it comes at a cost of greater complexity and more opportunities for user errors. If possible, prefer the functional API. The following example shows a subclassed `tf.keras.Model` using a custom forward pass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
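###Markdown
Because `compute_output_shape` is overridden above, the subclassed model should also be usable inside a functional-style model, which is the situation that method exists for. The following is a sketch of that usage (the wrapper names are illustrative, not from the original guide):
###Code
# Sketch: use the subclassed model as part of a functional-style model.
wrapped_inputs = tf.keras.Input(shape=(32,))
wrapped_outputs = MyModel(num_classes=10)(wrapped_inputs)
wrapped_model = tf.keras.Model(inputs=wrapped_inputs, outputs=wrapped_outputs)
wrapped_model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                      loss='categorical_crossentropy',
                      metrics=['accuracy'])
###Output
_____no_output_____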
###Markdown
Custom layers Create a custom layer by subclassing `tf.keras.layers.Layer` and implementing the following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method. Here's an example of a custom layer that implements a `matmul` of an input with a kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
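###Markdown
Since `get_config` and `from_config` are implemented above, the layer can be round-tripped through its configuration dictionary. A quick sketch:
###Code
# Round-trip the custom layer through its config.
layer = MyLayer(10)
config = layer.get_config()          # includes 'output_dim' plus base Layer fields
restored_layer = MyLayer.from_config(config)
print(restored_layer.output_dim)     # prints 10
###Output
_____no_output_____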
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Callbacks A callback is an object passed to a model to customize and extend its behavior during training. You can write your own custom callback, or use the built-in `tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard). To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
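###Markdown
The built-in `ModelCheckpoint` and `LearningRateScheduler` callbacks listed above can be combined the same way. The following is a sketch only: the schedule, the checkpoint file name pattern, and the `more_callbacks` name are illustrative, and the model is recompiled with a Keras optimizer because `LearningRateScheduler` expects the optimizer to expose an `lr` attribute.
###Code
# Sketch: checkpoint weights each epoch and decay the learning rate.
def schedule(epoch):
  # Halve an assumed base learning rate of 0.001 every 2 epochs.
  return 0.001 * (0.5 ** (epoch // 2))

# Recompile with a Keras optimizer so LearningRateScheduler can adjust `lr`.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
more_callbacks = [
    # One weights file per epoch; the file name pattern is illustrative.
    tf.keras.callbacks.ModelCheckpoint('weights.{epoch:02d}.h5',
                                       save_weights_only=True),
    tf.keras.callbacks.LearningRateScheduler(schedule)
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=more_callbacks,
          validation_data=(val_data, val_labels))
###Output
_____no_output_____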
###Markdown
Save and restore Weights only Save and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the [TensorFlow checkpoint](./checkpoints.md) file format. Weights can also be saved to the Keras HDF5 format (the default for the multi-backend implementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration only A model's configuration can be saved—this serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. Keras supports JSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture is defined by the Python code in the body of the `call` method. Entire model The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to a `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
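###Markdown
As a complement to the eager-execution note above, here is a minimal sketch of a single custom training step with `tf.GradientTape`. It assumes eager execution is enabled, the toy batch shapes match the model defined just above, and the variable names (`x_batch`, `y_batch`) are illustrative.
###Code
# Sketch: one custom training step with tf.GradientTape.
# Assumes eager execution is enabled.
x_batch = tf.constant(np.random.random((32, 32)), dtype=tf.float32)
y_batch = tf.constant(random_one_hot_labels((32, 10)), dtype=tf.float32)

optimizer = tf.train.GradientDescentOptimizer(0.01)
with tf.GradientTape() as tape:
  # Forward pass and mean cross-entropy loss for this batch.
  predictions = model(x_batch)
  loss = tf.reduce_mean(
      tf.keras.losses.categorical_crossentropy(y_batch, predictions))
# Backward pass: compute gradients and apply one update.
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print(float(loss))
###Output
_____no_output_____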
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data. Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object used to distribute the data across multiple devices—with each device processing a slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument to the `tf.distribute.MirroredStrategy` instance. When creating `MirroredStrategy`, you can specify a list of devices or set the `num_gpus` argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
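###Markdown
If you want to pin the replicas to particular devices instead of using every available GPU, `tf.distribute.MirroredStrategy` also accepts an explicit device list. A short sketch; the device names depend on your machine and the variable names are illustrative:
###Code
# Sketch: mirror the model only on the first two GPUs.
two_gpu_strategy = tf.distribute.MirroredStrategy(devices=['/gpu:0', '/gpu:1'])
two_gpu_config = tf.estimator.RunConfig(train_distribute=two_gpu_strategy)
###Output
_____no_output_____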
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps` arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages:- *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors.- *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions.- *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance. To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layers There are many `tf.keras.layers` available with some common constructor parameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply to the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied. The following instantiates `tf.keras.layers.Dense` layers using constructor arguments; a sketch with callables follows the example below:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
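###Markdown
Since `activation` and the initializers accept callables as well as string names, you can also pass arbitrary functions or configured initializer objects. A small sketch; the leaky-ReLU slope and the `TruncatedNormal` standard deviation below are arbitrary illustrative values:
###Code
# Activation given as a callable instead of a string name:
layers.Dense(64, activation=lambda x: tf.nn.leaky_relu(x, alpha=0.1))
# Kernel initializer given as a configured initializer object:
layers.Dense(64, kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.05))
###Output
_____no_output_____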
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module. The following shows a few examples of configuring a model for training (a custom-metric sketch follows them):
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
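###Markdown
Metrics can likewise be passed as callables. For example, a hypothetical top-3 accuracy wrapper (the `top3_accuracy` name and the choice of `k` are illustrative) built on `tf.keras.metrics.top_k_categorical_accuracy`:
###Code
# A custom metric passed as a callable.
def top3_accuracy(y_true, y_pred):
  return tf.keras.metrics.top_k_categorical_accuracy(y_true, y_pred, k=3)

model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=[tf.keras.metrics.categorical_accuracy, top3_accuracy])
###Output
_____no_output_____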
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration onlyA model's configuration can be saved—this serializes the model architecturewithout any weights. A saved configuration can recreate and initialize the samemodel, even without the code that defined the original model. Keras supportsJSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture isdefined by the Python code in the body of the `call` method. Entire modelThe entire model can be saved to a file that contains the weight values, themodel's configuration, and even the optimizer's configuration. This allows youto checkpoint a model and resume training later—from the exact samestate—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to a `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data. Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` objectused to distribute the data across multiple devices—with each device processinga slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argumentto the `tf.distribute.MirroredStrategy` instance. When creating`MirroredStrategy`, you can specify a list of devices or set the `num_gpus`argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages:- *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors.- *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions.- *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance. To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layers There are many `tf.keras.layers` available with some common constructor parameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply to the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied. The following instantiates `tf.keras.layers.Dense` layers using constructor arguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration onlyA model's configuration can be saved—this serializes the model architecturewithout any weights. A saved configuration can recreate and initialize the samemodel, even without the code that defined the original model. Keras supportsJSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture isdefined by the Python code in the body of the `call` method. Entire modelThe entire model can be saved to a file that contains the weight values, themodel's configuration, and even the optimizer's configuration. This allows youto checkpoint a model and resume training later—from the exact samestate—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to a `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data. Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object used to distribute the data across multiple devices—with each device processing a slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument to the `tf.distribute.MirroredStrategy` instance. When creating `MirroredStrategy`, you can specify a list of devices or set the `num_gpus` argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps` arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Run in Google Colab View source on GitHub Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages: - *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors. - *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions. - *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance. To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind: * The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`. * When [saving a model's weights](#weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential model In Keras, you assemble *layers* to build *models*. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the `tf.keras.Sequential` model. To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layers There are many `tf.keras.layers` available with some common constructor parameters: * `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied. * `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. This defaults to the `"Glorot uniform"` initializer. * `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply to the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied. The following instantiates `tf.keras.layers.Dense` layers using constructor arguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up training After the model is constructed, configure its learning process by calling the `compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments: * `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`. * `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module. * `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module. The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy data For small datasets, use in-memory [NumPy](https://www.numpy.org/) arrays to train and evaluate a model. The model is "fit" to the training data using the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments: * `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches). * `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size. * `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch. Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasets Use the [Datasets API](./datasets.md) to scale to large datasets or multi-device training. Pass a `tf.data.Dataset` instance to the `fit` method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number of training steps the model runs before it moves to the next epoch. Since the `Dataset` yields batches of data, this snippet does not require a `batch_size`. Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predict The `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPy data and a `tf.data.Dataset`. To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided, as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannot represent arbitrary models. Use the [Keras functional API](https://keras.io/getting-started/functional-api-guide/) to build complex model topologies such as: * Multi-input models, * Multi-output models, * Models with shared layers (the same layer called several times), * Models with non-sequential data flows (e.g. residual connections). Building a model with the functional API works like this: 1. A layer instance is callable and returns a tensor. 2. Input tensors and output tensors are used to define a `tf.keras.Model` instance. 3. This model is trained just like the `Sequential` model. The following example uses the functional API to build a simple, fully-connected network:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
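###Markdown
The functional API section above also lists multi-input models. The sketch below shows how several `tf.keras.Input` tensors can feed one `tf.keras.Model`; the two input shapes, layer sizes, and variable names are illustrative assumptions rather than part of the guide's running example.
###Code
# Sketch: a two-input functional model (illustrative shapes and names).
input_a = tf.keras.Input(shape=(32,))
input_b = tf.keras.Input(shape=(8,))
branch_a = layers.Dense(16, activation='relu')(input_a)
branch_b = layers.Dense(16, activation='relu')(input_b)
# Concatenate the two branches and classify the merged features.
merged = layers.concatenate([branch_a, branch_b])
outputs = layers.Dense(10, activation='softmax')(merged)
multi_input_model = tf.keras.Model(inputs=[input_a, input_b], outputs=outputs)
multi_input_model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                          loss='categorical_crossentropy',
                          metrics=['accuracy'])
###Output
_____no_output_____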
###Markdown
Model subclassing Build a fully-customizable model by subclassing `tf.keras.Model` and defining your own forward pass. Create layers in the `__init__` method and set them as attributes of the class instance. Define the forward pass in the `call` method. Model subclassing is particularly useful when [eager execution](./eager.md) is enabled since the forward pass can be written imperatively. Key Point: Use the right API for the job. While model subclassing offers flexibility, it comes at a cost of greater complexity and more opportunities for user errors. If possible, prefer the functional API. The following example shows a subclassed `tf.keras.Model` using a custom forward pass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layers Create a custom layer by subclassing `tf.keras.layers.Layer` and implementing the following methods: * `build`: Create the weights of the layer. Add weights with the `add_weight` method. * `call`: Define the forward pass. * `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape. * Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method. Here's an example of a custom layer that implements a `matmul` of an input with a kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
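###Markdown
Since `MyLayer` above implements `get_config` and `from_config`, its configuration can be round-tripped. A minimal sketch, assuming only the `MyLayer` class defined in the previous cell:
###Code
# Sketch: serialize the custom layer's configuration and rebuild it from the config.
layer = MyLayer(10)
config = layer.get_config()
print(config)
restored_layer = MyLayer.from_config(config)
###Output
_____no_output_____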
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Callbacks A callback is an object passed to a model to customize and extend its behavior during training. You can write your own custom callback, or use the built-in `tf.keras.callbacks` that include: * `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals. * `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate. * `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving. * `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](./summaries_and_tensorboard.md). To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
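###Markdown
The callbacks section above notes that you can also write your own custom callback. A minimal sketch, reusing the `model`, `data`, and `labels` defined earlier; the callback name is an illustrative assumption:
###Code
# Sketch: a custom callback that prints the training loss at the end of each epoch.
class LossLogger(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs=None):
    logs = logs or {}
    print('epoch {}: loss = {:.4f}'.format(epoch, logs.get('loss', float('nan'))))

model.fit(data, labels, batch_size=32, epochs=5,
          callbacks=[LossLogger()])
###Output
_____no_output_____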
###Markdown
Save and restore Weights only Save and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
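###Markdown
As the comment in the cell above notes, restoring the checkpoint requires a model with the same architecture. A minimal sketch that rebuilds the same architecture and loads the saved weights into it; the variable name is illustrative:
###Code
# Sketch: load the TensorFlow checkpoint into a freshly built model with the same layers.
rebuilt_model = tf.keras.Sequential([
  layers.Dense(64, activation='relu', input_shape=(32,)),
  layers.Dense(10, activation='softmax')])
rebuilt_model.load_weights('./weights/my_model')
###Output
_____no_output_____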
###Markdown
By default, this saves the model's weights in the [TensorFlow checkpoint](./checkpoints.md) file format. Weights can also be saved to the Keras HDF5 format (the default for the multi-backend implementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration only A model's configuration can be saved—this serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. Keras supports JSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture is defined by the Python code in the body of the `call` method. Entire model The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution [Eager execution](./eager.md) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.md#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to a `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating_estimators_from_keras_models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.md) for debugging [Estimator input functions](./premade_estimators.md#create_input_functions) and inspecting data. Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.contrib.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.contrib.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object used to distribute the data across multiple devices—with each device processing a slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument to the `tf.contrib.distribute.MirroredStrategy` instance. When creating `MirroredStrategy`, you can specify a list of devices or set the `num_gpus` argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps` arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Run in Google Colab View source on GitHub Keras is a high-level API to build and train deep learning models. It's used forfast prototyping, advanced research, and production, with three key advantages:- *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors.- *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions.- *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras`tf.keras` is TensorFlow's implementation of the[Keras API specification](https://keras.io). This is a high-levelAPI to build and train models that includes first-class support forTensorFlow-specific functionality, such as [eager execution](eager_execution),`tf.data` pipelines, and [Estimators](./estimators.md).`tf.keras` makes TensorFlow easier to use without sacrificing flexibility andperformance.To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. This defaults to the `"Glorot uniform"` initializer.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
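###Markdown
The list of built-in callbacks above also names `ModelCheckpoint` and `LearningRateScheduler`. A minimal sketch of using both with the model trained above; the checkpoint filename pattern and the decay schedule are illustrative assumptions.
###Code
# Sketch: built-in checkpoint and learning-rate callbacks.
def lr_schedule(epoch):
  # Halve an initial learning rate of 0.01 every two epochs (illustrative schedule).
  return 0.01 * (0.5 ** (epoch // 2))

more_callbacks = [
  # Saving weights to HDF5 assumes h5py is installed, as elsewhere in this guide.
  tf.keras.callbacks.ModelCheckpoint('weights.{epoch:02d}.h5',
                                     save_weights_only=True),
  tf.keras.callbacks.LearningRateScheduler(lr_schedule)
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=more_callbacks)
###Output
_____no_output_____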
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration onlyA model's configuration can be saved—this serializes the model architecturewithout any weights. A saved configuration can recreate and initialize the samemodel, even without the code that defined the original model. Keras supportsJSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
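###Markdown
The JSON string can also be written to disk and used later to rebuild the architecture. A minimal sketch; the filename `model_config.json` is an illustrative assumption:
###Code
# Sketch: persist the JSON configuration and reload it from disk.
with open('model_config.json', 'w') as f:
  f.write(json_string)
with open('model_config.json') as f:
  model_from_file = tf.keras.models.model_from_json(f.read())
###Output
_____no_output_____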
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture is defined by the Python code in the body of the `call` method. Entire model The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to a `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data. Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.contrib.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.contrib.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` objectused to distribute the data across multiple devices—with each device processinga slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument to the `tf.contrib.distribute.MirroredStrategy` instance. When creating `MirroredStrategy`, you can specify a list of devices or set the `num_gpus` argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
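###Markdown
As noted above, `MirroredStrategy` can also be pinned to specific devices or to a GPU count. The sketch below is an assumption-laden illustration: the exact constructor arguments of `tf.contrib.distribute.MirroredStrategy` vary across TF 1.x releases, so treat the argument names as examples to check against your installed version.
###Code
# Sketch: restrict the strategy to a GPU count or explicit devices
# (argument names may differ by TF 1.x release).
strategy_two_gpus = tf.contrib.distribute.MirroredStrategy(num_gpus=2)
# Alternatively, an explicit device list:
# strategy_devices = tf.contrib.distribute.MirroredStrategy(devices=['/gpu:0', '/gpu:1'])
###Output
_____no_output_____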
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps` arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages: - *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors. - *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions. - *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance. To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
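###Markdown
The functional API also covers the multi-input topologies listed above. The cell below is an illustrative sketch that is not part of the original guide: it feeds two hypothetical inputs through separate `Dense` branches, merges them with `tf.keras.layers.concatenate`, and trains on random NumPy data; the input sizes and layer widths are arbitrary.
###Code
# A minimal multi-input sketch; the second input size (8) is arbitrary.
input_a = tf.keras.Input(shape=(32,))
input_b = tf.keras.Input(shape=(8,))
branch_a = layers.Dense(16, activation='relu')(input_a)
branch_b = layers.Dense(16, activation='relu')(input_b)
merged = tf.keras.layers.concatenate([branch_a, branch_b])
outputs = layers.Dense(10, activation='softmax')(merged)
multi_input_model = tf.keras.Model(inputs=[input_a, input_b], outputs=outputs)
multi_input_model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                          loss='categorical_crossentropy',
                          metrics=['accuracy'])
# Pass one NumPy array per input, in the same order as `inputs`.
data_b = np.random.random((1000, 8))
multi_input_model.fit([data, data_b], labels, batch_size=32, epochs=5)
###Output
_____no_output_____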
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
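###Markdown
Because the subclassed model's forward pass is plain Python, it can also be trained with a custom loop. The cell below is a minimal sketch, not part of the original guide, and it assumes eager execution is enabled; under graph mode, the `fit` call above remains the usual route.
###Code
# Sketch of one custom training step with tf.GradientTape.
# Assumes eager execution is enabled; otherwise prefer `model.fit` as above.
eager_model = MyModel(num_classes=10)
optimizer = tf.train.RMSPropOptimizer(0.001)
features = tf.constant(data, dtype=tf.float32)
targets = tf.constant(labels, dtype=tf.float32)
with tf.GradientTape() as tape:
    logits = eager_model(features)  # forward pass defined in `call`
    loss = tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(targets, logits))
grads = tape.gradient(loss, eager_model.trainable_variables)
optimizer.apply_gradients(zip(grads, eager_model.trainable_variables))
print(loss)
###Output
_____no_output_____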
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
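###Markdown
Since `MyLayer` implements `get_config` and `from_config`, its architecture can round-trip through JSON. The cell below is a small illustrative sketch, not from the original guide: the `custom_objects` argument tells Keras how to resolve the custom class name during deserialization.
###Code
# Serialize the model containing the custom layer, then rebuild it.
json_with_custom_layer = model.to_json()
rebuilt = tf.keras.models.model_from_json(
    json_with_custom_layer,
    custom_objects={'MyLayer': MyLayer})
# `rebuilt` is a freshly initialized model with the same architecture.
print(type(rebuilt))
###Output
_____no_output_____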
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
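###Markdown
Beyond the built-in callbacks above, you can also define your own by subclassing `tf.keras.callbacks.Callback`. The sketch below is illustrative and not part of the original guide: it overrides `on_epoch_end` to print the validation loss after every epoch.
###Code
class PrintValLoss(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # `logs` holds the metrics computed for this epoch.
        logs = logs or {}
        print('epoch', epoch, 'val_loss:', logs.get('val_loss'))
model.fit(data, labels, batch_size=32, epochs=5,
          callbacks=[PrintValLoss()],
          validation_data=(val_data, val_labels))
###Output
_____no_output_____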
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration onlyA model's configuration can be saved—this serializes the model architecturewithout any weights. A saved configuration can recreate and initialize the samemodel, even without the code that defined the original model. Keras supportsJSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture isdefined by the Python code in the body of the `call` method. Entire modelThe entire model can be saved to a file that contains the weight values, themodel's configuration, and even the optimizer's configuration. This allows youto checkpoint a model and resume training later—from the exact samestate—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
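###Markdown
As the caution above notes, a subclassed model has no serializable configuration, so whole-model saving does not apply to it. A common workaround, sketched here for illustration only, is to save just its weights and later restore them into a freshly constructed instance of the same class (the checkpoint path is arbitrary).
###Code
# Weights-only checkpointing for the subclassed `MyModel` defined earlier.
subclassed = MyModel(num_classes=10)
subclassed.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
subclassed.fit(data, labels, batch_size=32, epochs=1)
subclassed.save_weights('./weights/my_subclassed_model')
# Restoring requires rebuilding the same architecture in code first.
restored = MyModel(num_classes=10)
restored.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                 loss='categorical_crossentropy',
                 metrics=['accuracy'])
restored.train_on_batch(data[:1], labels[:1])  # builds the variables
restored.load_weights('./weights/my_subclassed_model')
###Output
_____no_output_____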
###Markdown
Eager execution [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to an `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data. Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` objectused to distribute the data across multiple devices—with each device processinga slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argumentto the `tf.distribute.MirroredStrategy` instance. When creating`MirroredStrategy`, you can specify a list of devices or set the `num_gpus`argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages: - *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors. - *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions. - *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance. To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
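###Markdown
The functional API also handles the multi-output case listed above. The cell below is an illustrative sketch, not from the original guide: it attaches two heads, hypothetically named `class_output` and `score_output`, and compiles with one loss per output, using random NumPy targets.
###Code
# A two-headed model over the same 32-feature input; names are arbitrary.
mo_inputs = tf.keras.Input(shape=(32,))
h = layers.Dense(64, activation='relu')(mo_inputs)
class_output = layers.Dense(10, activation='softmax', name='class_output')(h)
score_output = layers.Dense(1, name='score_output')(h)
multi_output_model = tf.keras.Model(inputs=mo_inputs,
                                    outputs=[class_output, score_output])
# One loss per output, keyed by the output layer names.
multi_output_model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                           loss={'class_output': 'categorical_crossentropy',
                                 'score_output': 'mse'})
scores = np.random.random((1000, 1))
multi_output_model.fit(data,
                       {'class_output': labels, 'score_output': scores},
                       batch_size=32, epochs=5)
###Output
_____no_output_____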
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
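###Markdown
The `ModelCheckpoint` and `LearningRateScheduler` callbacks listed above work the same way. The cell below is an illustrative sketch, not part of the original guide: it keeps only the best weights seen so far (using a hypothetical file name) and applies a hypothetical learning-rate schedule; it recompiles with the built-in `'rmsprop'` optimizer because `LearningRateScheduler` needs a Keras optimizer with an `lr` attribute.
###Code
# Recompile with a Keras-native optimizer so the scheduler can adjust `lr`.
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
def schedule(epoch):
    # Hypothetical schedule: start at 0.001 and halve every two epochs.
    return 0.001 * (0.5 ** (epoch // 2))
more_callbacks = [
    # Keep only the best weights seen so far (hypothetical file name).
    tf.keras.callbacks.ModelCheckpoint('best_weights.h5',
                                       save_best_only=True,
                                       save_weights_only=True),
    tf.keras.callbacks.LearningRateScheduler(schedule),
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=more_callbacks,
          validation_data=(val_data, val_labels))
###Output
_____no_output_____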
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration onlyA model's configuration can be saved—this serializes the model architecturewithout any weights. A saved configuration can recreate and initialize the samemodel, even without the code that defined the original model. Keras supportsJSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture isdefined by the Python code in the body of the `call` method. Entire modelThe entire model can be saved to a file that contains the weight values, themodel's configuration, and even the optimizer's configuration. This allows youto checkpoint a model and resume training later—from the exact samestate—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
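###Markdown
Because the saved file also contains the optimizer state, training can simply continue from where it stopped. The short cell below is illustrative and not part of the original guide: it reloads the file saved above and runs a couple more epochs.
###Code
# Reload the full model (architecture + weights + optimizer state) and keep training.
resumed = tf.keras.models.load_model('my_model.h5')
resumed.fit(data, labels, batch_size=32, epochs=2)
###Output
_____no_output_____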
###Markdown
Eager execution [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to an `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data. Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` objectused to distribute the data across multiple devices—with each device processinga slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argumentto the `tf.distribute.MirroredStrategy` instance. When creating`MirroredStrategy`, you can specify a list of devices or set the `num_gpus`argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages: - *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors. - *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions. - *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance. To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
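###Markdown
Since the model ends in a softmax layer, each row of `result` is a probability distribution over the 10 classes. A common follow-up, sketched here for illustration and not part of the original guide, is to take the argmax to obtain predicted class indices.
###Code
# Convert per-class probabilities into predicted class indices.
predicted_classes = np.argmax(result, axis=-1)
print(predicted_classes.shape)
print(predicted_classes[:10])
###Output
_____no_output_____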
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
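###Markdown
Shared layers, one of the topologies listed above, simply reuse the same layer instance on several tensors so that the weights are shared. The cell below is an illustrative sketch, not from the original guide: it applies one `Dense` encoder to two hypothetical inputs and trains a small matching head on random targets.
###Code
# One Dense instance applied to two inputs shares a single set of weights.
shared_encoder = layers.Dense(16, activation='relu')
input_left = tf.keras.Input(shape=(32,))
input_right = tf.keras.Input(shape=(32,))
encoded_left = shared_encoder(input_left)    # same weights...
encoded_right = shared_encoder(input_right)  # ...reused here
merged = tf.keras.layers.concatenate([encoded_left, encoded_right])
match_score = layers.Dense(1, activation='sigmoid')(merged)
shared_model = tf.keras.Model(inputs=[input_left, input_right],
                              outputs=match_score)
shared_model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                     loss='binary_crossentropy')
pair_labels = np.random.randint(2, size=(1000, 1))
shared_model.fit([data, data], pair_labels, batch_size=32, epochs=5)
###Output
_____no_output_____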
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration onlyA model's configuration can be saved—this serializes the model architecturewithout any weights. A saved configuration can recreate and initialize the samemodel, even without the code that defined the original model. Keras supportsJSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture isdefined by the Python code in the body of the `call` method. Entire modelThe entire model can be saved to a file that contains the weight values, themodel's configuration, and even the optimizer's configuration. This allows youto checkpoint a model and resume training later—from the exact samestate—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to an `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
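###Markdown
As a companion to the eager-execution notes above, here is a minimal sketch of a custom training loop written with `tf.GradientTape`. It assumes eager execution was enabled at program startup (for example with `tf.enable_eager_execution()`), which is not done in this notebook, so treat it as an illustration rather than a cell to run here.
###Code
import numpy as np
# Build a small model and a TF optimizer just for this sketch.
loop_model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(32,)),
    layers.Dense(10, activation='softmax')])
loop_optimizer = tf.train.AdamOptimizer(0.001)
for step in range(5):
  # Random stand-in data; any (features, one-hot labels) batch works here.
  x_batch = np.random.random((32, 32)).astype(np.float32)
  y_batch = np.eye(10)[np.random.randint(0, 10, 32)].astype(np.float32)
  with tf.GradientTape() as tape:
    logits = loop_model(x_batch)
    loss = tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(y_batch, logits))
  # Compute gradients of the loss and apply them to the model's variables.
  grads = tape.gradient(loss, loop_model.trainable_variables)
  loop_optimizer.apply_gradients(zip(grads, loop_model.trainable_variables))
###Output
_____no_output_____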
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data. Multiple GPUs: `tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object used to distribute the data across multiple devices—with each device processing a slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument to the `tf.distribute.MirroredStrategy` instance. When creating `MirroredStrategy`, you can specify a list of devices or set the `num_gpus` argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
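###Markdown
The cell above uses the default `MirroredStrategy`, which mirrors across every GPU it can see. As noted in the preceding paragraph, an explicit device list can be passed instead; the following is a sketch that assumes the machine has at least two GPUs.
###Code
# Mirror only over an explicit pair of devices (assumes '/gpu:0' and '/gpu:1' exist).
strategy_two_gpus = tf.distribute.MirroredStrategy(devices=['/gpu:0', '/gpu:1'])
config_two_gpus = tf.estimator.RunConfig(train_distribute=strategy_two_gpus)
###Output
_____no_output_____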
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps` arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages: - *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors. - *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions. - *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras: `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance. To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind: * The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`. * When [saving a model's weights](#weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model: Sequential model: In Keras, you assemble *layers* to build *models*. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the `tf.keras.Sequential` model. To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate: Set up training: After the model is constructed, configure its learning process by calling the `compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy data: For small datasets, use in-memory [NumPy](https://www.numpy.org/) arrays to train and evaluate a model. The model is "fit" to the training data using the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasets: Use the [Datasets API](./datasets.md) to scale to large datasets or multi-device training. Pass a `tf.data.Dataset` instance to the `fit` method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number of training steps the model runs before it moves to the next epoch. Since the `Dataset` yields batches of data, this snippet does not require a `batch_size`. Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predict: The `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPy data and a `tf.data.Dataset`. To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided, as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models: Functional API: The `tf.keras.Sequential` model is a simple stack of layers that cannot represent arbitrary models. Use the [Keras functional API](https://keras.io/getting-started/functional-api-guide/) to build complex model topologies such as: * Multi-input models, * Multi-output models, * Models with shared layers (the same layer called several times), * Models with non-sequential data flows (e.g. residual connections). Building a model with the functional API works like this: 1. A layer instance is callable and returns a tensor. 2. Input tensors and output tensors are used to define a `tf.keras.Model` instance. 3. This model is trained just like the `Sequential` model. The following example uses the functional API to build a simple, fully-connected network:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
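###Markdown
The functional API also covers the multi-input topologies listed above. The following is a minimal sketch, not part of the original guide: the two input names and the second random feature array are purely illustrative.
###Code
# Two inputs (names are illustrative), merged before a shared classification head.
input_a = tf.keras.Input(shape=(32,), name='input_a')
input_b = tf.keras.Input(shape=(8,), name='input_b')
merged = layers.concatenate([
    layers.Dense(16, activation='relu')(input_a),
    layers.Dense(16, activation='relu')(input_b)])
multi_output = layers.Dense(10, activation='softmax')(merged)
multi_input_model = tf.keras.Model(inputs=[input_a, input_b], outputs=multi_output)
multi_input_model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                          loss='categorical_crossentropy',
                          metrics=['accuracy'])
# Pass one array per input; the second array is random stand-in data.
multi_input_model.fit([data, np.random.random((1000, 8))], labels,
                      batch_size=32, epochs=1)
###Output
_____no_output_____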
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
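###Markdown
Since `MyLayer` implements `get_config` and `from_config`, a model that uses it can be round-tripped through its JSON configuration. This is a sketch: pass the class via `custom_objects` so Keras can resolve the custom layer when deserializing.
###Code
# Serialize the architecture, then rebuild it with the custom layer registered.
custom_json = model.to_json()
restored_model = tf.keras.models.model_from_json(
    custom_json, custom_objects={'MyLayer': MyLayer})
###Output
_____no_output_____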
###Markdown
Callbacks: A callback is an object passed to a model to customize and extend its behavior during training. You can write your own custom callback, or use the built-in `tf.keras.callbacks` that include: * `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals. * `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate. * `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving. * `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard). To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
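###Markdown
A sketch of the `ModelCheckpoint` callback mentioned in the list above: write the model's weights to disk after every epoch. The file-name pattern here is illustrative.
###Code
checkpoint_callbacks = [
    # `{epoch:02d}` is filled in by Keras each time a file is written.
    tf.keras.callbacks.ModelCheckpoint('weights.{epoch:02d}.h5',
                                       save_weights_only=True)
]
model.fit(data, labels, batch_size=32, epochs=5,
          callbacks=checkpoint_callbacks,
          validation_data=(val_data, val_labels))
###Output
_____no_output_____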
###Markdown
Save and restore: Weights only: Save and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
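###Markdown
The checkpoint format writes several files under the `./weights/` prefix rather than a single file. A small sketch of locating the most recent checkpoint programmatically with `tf.train.latest_checkpoint`:
###Code
# Returns the prefix of the newest checkpoint in the directory (or None if empty).
latest = tf.train.latest_checkpoint('./weights')
print(latest)
model.load_weights(latest)
###Output
_____no_output_____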
###Markdown
By default, this saves the model's weights in the [TensorFlow checkpoint](./checkpoints.md) file format. Weights can also be saved to the Keras HDF5 format (the default for the multi-backend implementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration only: A model's configuration can be saved—this serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. Keras supports JSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture is defined by the Python code in the body of the `call` method. Entire model: The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution: [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging. All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`. Distribution Estimators: The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production. A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to a `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data. Multiple GPUs: `tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator. The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` objectused to distribute the data across multiple devices—with each device processinga slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argumentto the `tf.distribute.MirroredStrategy` instance. When creating`MirroredStrategy`, you can specify a list of devices or set the `num_gpus`argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configuredto run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate)but will run in TF1 as well. To use TF1 in Colab, use the[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)magic. Keras is a high-level API to build and train deep learning models. It's used forfast prototyping, advanced research, and production, with three key advantages:- *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors.- *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions.- *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras`tf.keras` is TensorFlow's implementation of the[Keras API specification](https://keras.io). This is a high-levelAPI to build and train models that includes first-class support forTensorFlow-specific functionality, such as [eager execution](eager_execution),`tf.data` pipelines, and [Estimators](./estimators.md).`tf.keras` makes TensorFlow easier to use without sacrificing flexibility andperformance.To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
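###Markdown
Because `predict` returns a plain NumPy array, ordinary NumPy post-processing applies. For example, to recover the most likely class for each sample:
###Code
# Index of the highest softmax probability per row.
predicted_classes = np.argmax(result, axis=-1)
print(predicted_classes[:10])
###Output
_____no_output_____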
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
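###Markdown
As noted above, you can also write your own callback by subclassing `tf.keras.callbacks.Callback` and overriding any of its `on_*` hooks. A minimal sketch that prints the loss at the end of every epoch:
###Code
class PrintLossCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs=None):
    # `logs` holds the metrics tracked for this epoch.
    logs = logs or {}
    print('Epoch {} ended with loss {:.4f}'.format(epoch, logs.get('loss', 0.0)))
model.fit(data, labels, batch_size=32, epochs=5,
          callbacks=[PrintLossCallback()])
###Output
_____no_output_____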
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration onlyA model's configuration can be saved—this serializes the model architecturewithout any weights. A saved configuration can recreate and initialize the samemodel, even without the code that defined the original model. Keras supports theJSON serialization format:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
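###Markdown
The recreated model has the same architecture but freshly initialized weights, and the JSON configuration does not include the training setup, so it must be compiled again before training. A quick sanity check:
###Code
fresh_model.summary()
# Compile state is not part of the serialized configuration, so set it up again.
fresh_model.compile(optimizer='rmsprop',
                    loss='categorical_crossentropy',
                    metrics=['accuracy'])
###Output
_____no_output_____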
###Markdown
Caution: Subclassed models are not serializable because their architecture isdefined by the Python code in the body of the `call` method. Entire modelThe entire model can be saved to a file that contains the weight values, themodel's configuration, and even the optimizer's configuration. This allows youto checkpoint a model and resume training later—from the exact samestate—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution[Eager execution](./eager.ipynb) is an imperative programmingenvironment that evaluates operations immediately. This is not required forKeras, but is supported by `tf.keras` and useful for inspecting your program anddebugging.All of the `tf.keras` model-building APIs are compatible with eager execution.And while the `Sequential` and functional APIs can be used, eager executionespecially benefits *model subclassing* and building *custom layers*—the APIsthat require you to write the forward pass as code (instead of the APIs thatcreate models by assembling existing layers).See the [eager execution guide](./eager.ipynbbuild_a_model) forexamples of using Keras models with custom training loops and `tf.GradientTape`. Distribution EstimatorsThe [Estimators](./estimators.md) API is used for training modelsfor distributed environments. This targets industry use cases such asdistributed training on large datasets that can export a model for production.A `tf.keras.Model` can be trained with the `tf.estimator` API by converting themodel to an `tf.estimator.Estimator` object with`tf.keras.estimator.model_to_estimator`. See[Creating Estimators from Keras models](./estimators.mdcreating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging[Estimator input functions](./premade_estimators.mdcreate-input-functions)and inspecting data. Multiple GPUs`tf.keras` models can run on multiple GPUs using`tf.distribute.DistributionStrategy`. This API provides distributedtraining on multiple GPUs with almost no changes to existing code.Currently, `tf.distribute.MirroredStrategy` is the only supporteddistribution strategy. `MirroredStrategy` does in-graph replication withsynchronous training using all-reduce on a single machine. To use`DistributionStrategy` with Keras, convert the `tf.keras.Model` to a`tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, thentrain the estimatorThe following example distributes a `tf.keras.Model` across multiple GPUs on asingle machine.First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` objectused to distribute the data across multiple devices—with each device processinga slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argumentto the `tf.distribute.MirroredStrategy` instance. When creating`MirroredStrategy`, you can specify a list of devices or set the `num_gpus`argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
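###Markdown
The trained estimator can be evaluated with the same input function; `tf.estimator.Estimator.evaluate` returns a dictionary of metrics (here just the loss and the global step):
###Code
eval_results = keras_estimator.evaluate(input_fn=input_fn, steps=10)
print(eval_results)
###Output
_____no_output_____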
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages: - *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors. - *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions. - *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models. Import tf.keras: `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance. To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
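###Markdown
`fit` also returns a `tf.keras.callbacks.History` object whose `history` attribute maps each tracked metric to its per-epoch values, which is convenient for quick inspection or plotting:
###Code
history = model.fit(data, labels, epochs=10, batch_size=32)
# Per-epoch values for every tracked metric, keyed by metric name.
print(history.history.keys())
print(history.history['loss'])
###Output
_____no_output_____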
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
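###Markdown
As a quick sanity check on the `batch_size` note above: with 1000 samples and a batch size of 32, each epoch runs 31 full batches plus one final batch of only 8 samples. A small, framework-free sketch:
###Code
num_samples, batch_size = 1000, 32
full_batches, last_batch = divmod(num_samples, batch_size)
print(full_batches)   # 31 full batches of 32 samples
print(last_batch)     # the final batch holds the remaining 8 samples
###Output
_____no_output_____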
###Markdown
Input tf.data datasets Use the [Datasets API](./datasets.md) to scale to large datasets or multi-device training. Pass a `tf.data.Dataset` instance to the `fit` method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number of training steps the model runs before it moves to the next epoch. Since the `Dataset` yields batches of data, this snippet does not require a `batch_size`. Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
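###Markdown
A common way to choose `steps_per_epoch` so that one epoch still covers the whole dataset is to divide the sample count by the batch size and round up. A minimal sketch, reusing the `data`, `dataset`, and `val_dataset` objects from the cells above:
###Code
import math

batch_size = 32
steps_per_epoch = math.ceil(len(data) / batch_size)  # 1000 / 32 -> 32 steps
model.fit(dataset, epochs=10, steps_per_epoch=steps_per_epoch,
          validation_data=val_dataset, validation_steps=3)
###Output
_____no_output_____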
###Markdown
Evaluate and predict The `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPy data and a `tf.data.Dataset`. To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided, as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
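###Markdown
Since the final layer is a softmax over 10 classes, each row of `result` is a probability distribution; taking the arg-max per row gives predicted class indices. A short follow-up sketch:
###Code
predicted_classes = np.argmax(result, axis=-1)  # shape: (1000,)
print(predicted_classes[:10])                   # first ten predicted class indices
###Output
_____no_output_____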
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannot represent arbitrary models. Use the [Keras functional API](https://keras.io/getting-started/functional-api-guide/) to build complex model topologies such as:

* Multi-input models (a short sketch follows the example below),
* Multi-output models,
* Models with shared layers (the same layer called several times),
* Models with non-sequential data flows (e.g. residual connections).

Building a model with the functional API works like this:

1. A layer instance is callable and returns a tensor.
2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.
3. This model is trained just like the `Sequential` model.

The following example uses the functional API to build a simple, fully-connected network:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
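###Markdown
The functional API also covers the multi-input case listed above. The following is a minimal sketch; the input names and feature sizes (`input_a` with 32 features, `input_b` with 16) are made up for illustration:
###Code
# Two separate inputs, each processed by its own dense branch:
input_a = tf.keras.Input(shape=(32,), name='input_a')
input_b = tf.keras.Input(shape=(16,), name='input_b')
x_a = layers.Dense(32, activation='relu')(input_a)
x_b = layers.Dense(32, activation='relu')(input_b)
# Merge the two branches and classify:
merged = layers.concatenate([x_a, x_b])
outputs = layers.Dense(10, activation='softmax')(merged)
multi_input_model = tf.keras.Model(inputs=[input_a, input_b], outputs=outputs)
multi_input_model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
                          loss='categorical_crossentropy',
                          metrics=['accuracy'])
###Output
_____no_output_____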
###Markdown
Model subclassing Build a fully-customizable model by subclassing `tf.keras.Model` and defining your own forward pass. Create layers in the `__init__` method and set them as attributes of the class instance. Define the forward pass in the `call` method. Model subclassing is particularly useful when [eager execution](./eager.ipynb) is enabled since the forward pass can be written imperatively.

Key Point: Use the right API for the job. While model subclassing offers flexibility, it comes at a cost of greater complexity and more opportunities for user errors. If possible, prefer the functional API.

The following example shows a subclassed `tf.keras.Model` using a custom forward pass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layers Create a custom layer by subclassing `tf.keras.layers.Layer` and implementing the following methods:

* `build`: Create the weights of the layer. Add weights with the `add_weight` method.
* `call`: Define the forward pass.
* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.
* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method (a round-trip sketch follows the class definition below).

Here's an example of a custom layer that implements a `matmul` of an input with a kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
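###Markdown
Because `MyLayer` implements `get_config` and `from_config`, its configuration can be round-tripped without touching any weights. A quick sketch:
###Code
layer = MyLayer(10)
config = layer.get_config()        # includes 'output_dim': 10 plus base-layer fields
restored_layer = MyLayer.from_config(config)
print(restored_layer.output_dim)   # 10
###Output
_____no_output_____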
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Callbacks A callback is an object passed to a model to customize and extend its behavior during training. You can write your own custom callback, or use the built-in `tf.keras.callbacks` that include:

* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals (sketched after the example below).
* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.
* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.
* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).

To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
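###Markdown
The other built-in callbacks listed above follow the same pattern. Below is a sketch combining `ModelCheckpoint` and `LearningRateScheduler`; the file name `best_weights.h5` and the decay schedule are arbitrary choices, and the model is recompiled with the Keras `'rmsprop'` optimizer because `LearningRateScheduler` adjusts a Keras optimizer's learning rate:
###Code
# Recompile with a Keras optimizer so its learning rate can be scheduled.
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

callbacks = [
    # Save only the weights of the best model (by validation loss) seen so far:
    tf.keras.callbacks.ModelCheckpoint('best_weights.h5', monitor='val_loss',
                                       save_best_only=True, save_weights_only=True),
    # Decay the learning rate by 10% per epoch (arbitrary schedule):
    tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-3 * 0.9 ** epoch)
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
          validation_data=(val_data, val_labels))
###Output
_____no_output_____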
###Markdown
Save and restore Weights only Save and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the [TensorFlow checkpoint](./checkpoints.md) file format. Weights can also be saved to the Keras HDF5 format (the default for the multi-backend implementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration only A model's configuration can be saved—this serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. Keras supports the JSON serialization format:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture is defined by the Python code in the body of the `call` method. Entire model The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging.

All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape` (a minimal training-step sketch appears after the Estimator example below).

Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production.

A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to an `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
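###Markdown
As a companion to the eager-execution note above, here is a minimal sketch of a single custom training step with `tf.GradientTape`. It is an illustration only: it assumes eager execution is active (for example, a TF 2.x runtime), and the small model, Adam optimizer, and loss shown here are arbitrary choices rather than part of the original guide.
###Code
# Sketch of one custom training step with tf.GradientTape (assumes eager execution).
eager_model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(32,)),
    layers.Dense(10, activation='softmax')])
eager_optimizer = tf.keras.optimizers.Adam(0.001)

def train_step(x, y):
  with tf.GradientTape() as tape:
    predictions = eager_model(x)
    loss = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(y, predictions))
  gradients = tape.gradient(loss, eager_model.trainable_variables)
  eager_optimizer.apply_gradients(zip(gradients, eager_model.trainable_variables))
  return loss

# Example call (inputs cast to float32 to match the layer weights):
# loss_value = train_step(tf.cast(data, tf.float32), labels.astype('float32'))
###Output
_____no_output_____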
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data.

Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator.

The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object used to distribute the data across multiple devices—with each device processing a slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument to the `tf.distribute.MirroredStrategy` instance. When creating `MirroredStrategy`, you can specify a list of devices or set the `num_gpus` argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
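###Markdown
To restrict replication to specific devices instead of all available GPUs, a device list can be passed when the strategy is created. A sketch, assuming two GPUs are visible (the device names are illustrative):
###Code
# Mirror the model across two explicitly named GPUs (illustrative device names):
strategy = tf.distribute.MirroredStrategy(devices=['/gpu:0', '/gpu:1'])
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____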
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps` arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages:

- *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors.
- *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions.
- *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models.

Import tf.keras `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance.

To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:

* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.
* When [saving a model's weights](#weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5.

Build a simple model Sequential model In Keras, you assemble *layers* to build *models*. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the `tf.keras.Sequential` model. To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layers There are many `tf.keras.layers` available with some common constructor parameters:

* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.
* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. This defaults to the `"Glorot uniform"` initializer.
* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply to the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.

The following instantiates `tf.keras.layers.Dense` layers using constructor arguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration only A model's configuration can be saved—this serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. Keras supports JSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture is defined by the Python code in the body of the `call` method. Entire model The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
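###Markdown
Because the model's configuration and optimizer settings are restored along with the weights, the loaded model can continue training where it left off. A short sketch, reusing the `data` and `labels` arrays from earlier cells:
###Code
# Continue training the restored model; no re-compilation is needed.
model.fit(data, labels, batch_size=32, epochs=1)
###Output
_____no_output_____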
###Markdown
Eager execution [Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging.

All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*—the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers). See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`.

Distribution Estimators The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production.

A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to an `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data.

Multiple GPUs `tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code. Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator.

The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine. First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` objectused to distribute the data across multiple devices—with each device processinga slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argumentto the `tf.distribute.MirroredStrategy` instance. When creating`MirroredStrategy`, you can specify a list of devices or set the `num_gpus`argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages:

- *User friendly* Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors.
- *Modular and composable* Keras models are made by connecting configurable building blocks together, with few restrictions.
- *Easy to extend* Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models.

Import tf.keras `tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance.

To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.* When [saving a model's weights](weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5. Build a simple model Sequential modelIn Keras, you assemble *layers* to build *models*. A model is (usually) a graphof layers. The most common type of model is a stack of layers: the`tf.keras.Sequential` model.To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layersThere are many `tf.keras.layers` available with some common constructorparameters:* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. This defaults to the `"Glorot uniform"` initializer.* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.The following instantiates `tf.keras.layers.Dense` layers using constructorarguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate Set up trainingAfter the model is constructed, configure its learning process by calling the`compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy dataFor small datasets, use in-memory [NumPy](https://www.numpy.org/)arrays to train and evaluate a model. The model is "fit" to the training datausing the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument—a tuple of inputs and labels—allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasetsUse the [Datasets API](./datasets.md) to scale to large datasetsor multi-device training. Pass a `tf.data.Dataset` instance to the `fit`method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number oftraining steps the model runs before it moves to the next epoch. Since the`Dataset` yields batches of data, this snippet does not require a `batch_size`.Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predictThe `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPydata and a `tf.data.Dataset`.To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided,as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
###Markdown
Build advanced models Functional API The `tf.keras.Sequential` model is a simple stack of layers that cannotrepresent arbitrary models. Use the[Keras functional API](https://keras.io/getting-started/functional-api-guide/)to build complex model topologies such as:* Multi-input models,* Multi-output models,* Models with shared layers (the same layer called several times),* Models with non-sequential data flows (e.g. residual connections).Building a model with the functional API works like this:1. A layer instance is callable and returns a tensor.2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.3. This model is trained just like the `Sequential` model.The following example uses the functional API to build a simple, fully-connectednetwork:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Model subclassingBuild a fully-customizable model by subclassing `tf.keras.Model` and definingyour own forward pass. Create layers in the `__init__` method and set them asattributes of the class instance. Define the forward pass in the `call` method.Model subclassing is particularly useful when[eager execution](./eager.ipynb) is enabled since the forward passcan be written imperatively.Key Point: Use the right API for the job. While model subclassing offersflexibility, it comes at a cost of greater complexity and more opportunities foruser errors. If possible, prefer the functional API.The following example shows a subclassed `tf.keras.Model` using a custom forwardpass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layersCreate a custom layer by subclassing `tf.keras.layers.Layer` and implementingthe following methods:* `build`: Create the weights of the layer. Add weights with the `add_weight` method.* `call`: Define the forward pass.* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.Here's an example of a custom layer that implements a `matmul` of an input witha kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
CallbacksA callback is an object passed to a model to customize and extend its behaviorduring training. You can write your own custom callback, or use the built-in`tf.keras.callbacks` that include:* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Save and restore Weights onlySave and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the[TensorFlow checkpoint](./checkpoints.md) file format. Weights canalso be saved to the Keras HDF5 format (the default for the multi-backendimplementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration only A model's configuration can be saved—this serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. Keras supports JSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture isdefined by the Python code in the body of the `call` method. Entire modelThe entire model can be saved to a file that contains the weight values, themodel's configuration, and even the optimizer's configuration. This allows youto checkpoint a model and resume training later—from the exact samestate—without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
###Markdown
Eager execution

[Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging.

All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*: the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers).

See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`.

Distribution

Estimators

The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production.

A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to a `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data.

Multiple GPUs

`tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code.

Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator.

The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine.

First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object used to distribute the data across multiple devices, with each device processing a slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument to the `tf.distribute.MirroredStrategy` instance. When creating `MirroredStrategy`, you can specify a list of devices or set the `num_gpus` argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
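###Markdown
As a sketch of the alternative mentioned above (an assumption, not from the original guide), the strategy can also be built with an explicit device list; this particular call only makes sense on a machine with at least two visible GPUs:
###Code
# Mirror the model across two explicitly named GPUs instead of all available ones
explicit_strategy = tf.distribute.MirroredStrategy(devices=['/gpu:0', '/gpu:1'])
explicit_config = tf.estimator.RunConfig(train_distribute=explicit_strategy)
###Output
_____no_output_____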
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps` arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Keras

> Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic.

Keras is a high-level API to build and train deep learning models. It's used for fast prototyping, advanced research, and production, with three key advantages:

- *User friendly*: Keras has a simple, consistent interface optimized for common use cases. It provides clear and actionable feedback for user errors.
- *Modular and composable*: Keras models are made by connecting configurable building blocks together, with few restrictions.
- *Easy to extend*: Write custom building blocks to express new ideas for research. Create new layers, loss functions, and develop state-of-the-art models.

Import tf.keras

`tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality, such as [eager execution](#eager_execution), `tf.data` pipelines, and [Estimators](./estimators.md). `tf.keras` makes TensorFlow easier to use without sacrificing flexibility and performance.

To get started, import `tf.keras` as part of your TensorFlow program setup:
###Code
!pip install pyyaml # Required to save models in YAML format
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
###Output
_____no_output_____
###Markdown
`tf.keras` can run any Keras-compatible code, but keep in mind:

* The `tf.keras` version in the latest TensorFlow release might not be the same as the latest `keras` version from PyPI. Check `tf.keras.__version__`.
* When [saving a model's weights](#weights_only), `tf.keras` defaults to the [checkpoint format](./checkpoints.md). Pass `save_format='h5'` to use HDF5.

Build a simple model

Sequential model

In Keras, you assemble *layers* to build *models*. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the `tf.keras.Sequential` model.

To build a simple, fully-connected network (i.e. multi-layer perceptron):
###Code
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Configure the layers

There are many `tf.keras.layers` available with some common constructor parameters:

* `activation`: Set the activation function for the layer. This parameter is specified by the name of a built-in function or as a callable object. By default, no activation is applied.
* `kernel_initializer` and `bias_initializer`: The initialization schemes that create the layer's weights (kernel and bias). This parameter is a name or a callable object. The kernel defaults to the `"Glorot uniform"` initializer, and the bias defaults to zeros.
* `kernel_regularizer` and `bias_regularizer`: The regularization schemes that apply to the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.

The following instantiates `tf.keras.layers.Dense` layers using constructor arguments:
###Code
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
###Output
_____no_output_____
###Markdown
Train and evaluate

Set up training

After the model is constructed, configure its learning process by calling the `compile` method:
###Code
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
`tf.keras.Model.compile` takes three important arguments:

* `optimizer`: This object specifies the training procedure. Pass it optimizer instances from the `tf.train` module, such as `tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or `tf.train.GradientDescentOptimizer`.
* `loss`: The function to minimize during optimization. Common choices include mean square error (`mse`), `categorical_crossentropy`, and `binary_crossentropy`. Loss functions are specified by name or by passing a callable object from the `tf.keras.losses` module.
* `metrics`: Used to monitor training. These are string names or callables from the `tf.keras.metrics` module.

The following shows a few examples of configuring a model for training:
###Code
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Input NumPy data

For small datasets, use in-memory [NumPy](https://www.numpy.org/) arrays to train and evaluate a model. The model is "fit" to the training data using the `fit` method:
###Code
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
###Output
_____no_output_____
###Markdown
`tf.keras.Model.fit` takes three important arguments:

* `epochs`: Training is structured into *epochs*. An epoch is one iteration over the entire input data (this is done in smaller batches).
* `batch_size`: When passed NumPy data, the model slices the data into smaller batches and iterates over these batches during training. This integer specifies the size of each batch. Be aware that the last batch may be smaller if the total number of samples is not divisible by the batch size.
* `validation_data`: When prototyping a model, you want to easily monitor its performance on some validation data. Passing this argument (a tuple of inputs and labels) allows the model to display the loss and metrics in inference mode for the passed data, at the end of each epoch.

Here's an example using `validation_data`:
###Code
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Input tf.data datasets

Use the [Datasets API](./datasets.md) to scale to large datasets or multi-device training. Pass a `tf.data.Dataset` instance to the `fit` method:
###Code
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
###Output
_____no_output_____
###Markdown
Here, the `fit` method uses the `steps_per_epoch` argument: this is the number of training steps the model runs before it moves to the next epoch. Since the `Dataset` yields batches of data, this snippet does not require a `batch_size`.

Datasets can also be used for validation:
###Code
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
###Output
_____no_output_____
###Markdown
Evaluate and predict

The `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPy data and a `tf.data.Dataset`.

To *evaluate* the inference-mode loss and metrics for the data provided:
###Code
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
###Output
_____no_output_____
###Markdown
And to *predict* the output of the last layer in inference for the data provided, as a NumPy array:
###Code
result = model.predict(data, batch_size=32)
print(result.shape)
###Output
_____no_output_____
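###Markdown
As a small follow-up (not part of the original guide), the class probabilities returned by `predict` can be turned into hard class labels with an argmax over the last axis:
###Code
# Index of the highest-probability class for each of the first 10 samples
predicted_classes = np.argmax(result, axis=-1)
print(predicted_classes[:10])
###Output
_____no_output_____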
###Markdown
Build advanced models

Functional API

The `tf.keras.Sequential` model is a simple stack of layers that cannot represent arbitrary models. Use the [Keras functional API](https://keras.io/getting-started/functional-api-guide/) to build complex model topologies such as:

* Multi-input models,
* Multi-output models,
* Models with shared layers (the same layer called several times),
* Models with non-sequential data flows (e.g. residual connections).

Building a model with the functional API works like this:

1. A layer instance is callable and returns a tensor.
2. Input tensors and output tensors are used to define a `tf.keras.Model` instance.
3. This model is trained just like the `Sequential` model.

The following example uses the functional API to build a simple, fully-connected network:
###Code
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
###Output
_____no_output_____
###Markdown
Instantiate the model given inputs and outputs.
###Code
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Model subclassing

Build a fully-customizable model by subclassing `tf.keras.Model` and defining your own forward pass. Create layers in the `__init__` method and set them as attributes of the class instance. Define the forward pass in the `call` method.

Model subclassing is particularly useful when [eager execution](./eager.ipynb) is enabled since the forward pass can be written imperatively.

Key Point: Use the right API for the job. While model subclassing offers flexibility, it comes at a cost of greater complexity and more opportunities for user errors. If possible, prefer the functional API.

The following example shows a subclassed `tf.keras.Model` using a custom forward pass:
###Code
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
###Output
_____no_output_____
###Markdown
Instantiate the new model class:
###Code
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
###Markdown
Custom layers

Create a custom layer by subclassing `tf.keras.layers.Layer` and implementing the following methods:

* `build`: Create the weights of the layer. Add weights with the `add_weight` method.
* `call`: Define the forward pass.
* `compute_output_shape`: Specify how to compute the output shape of the layer given the input shape.
* Optionally, a layer can be serialized by implementing the `get_config` method and the `from_config` class method.

Here's an example of a custom layer that implements a `matmul` of an input with a kernel matrix:
###Code
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
###Output
_____no_output_____
###Markdown
Create a model using your custom layer:
###Code
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
###Output
_____no_output_____
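###Markdown
Since `MyLayer` implements `get_config` and `from_config`, its architecture can round-trip through JSON. A minimal sketch (not from the original guide); the `custom_objects` mapping is needed so Keras can resolve the custom class by name:
###Code
# Serialize the model containing the custom layer, then rebuild it from JSON
config_json = model.to_json()
rebuilt = tf.keras.models.model_from_json(
    config_json, custom_objects={'MyLayer': MyLayer})
print([layer.name for layer in rebuilt.layers])
###Output
_____no_output_____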
###Markdown
Callbacks

A callback is an object passed to a model to customize and extend its behavior during training. You can write your own custom callback, or use the built-in `tf.keras.callbacks` that include:

* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at regular intervals.
* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning rate.
* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation performance has stopped improving.
* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using [TensorBoard](https://tensorflow.org/tensorboard).

To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
###Code
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
###Output
_____no_output_____
###Markdown
Save and restore

Weights only

Save and load the weights of a model using `tf.keras.Model.save_weights`:
###Code
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
###Output
_____no_output_____
###Markdown
By default, this saves the model's weights in the [TensorFlow checkpoint](./checkpoints.md) file format. Weights can also be saved to the Keras HDF5 format (the default for the multi-backend implementation of Keras):
###Code
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
###Output
_____no_output_____
###Markdown
Configuration only

A model's configuration can be saved. This serializes the model architecture without any weights. A saved configuration can recreate and initialize the same model, even without the code that defined the original model. Keras supports JSON and YAML serialization formats:
###Code
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
###Output
_____no_output_____
###Markdown
Recreate the model (newly initialized) from the JSON:
###Code
fresh_model = tf.keras.models.model_from_json(json_string)
###Output
_____no_output_____
###Markdown
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
###Code
yaml_string = model.to_yaml()
print(yaml_string)
###Output
_____no_output_____
###Markdown
Recreate the model from the YAML:
###Code
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
###Output
_____no_output_____
###Markdown
Caution: Subclassed models are not serializable because their architecture is defined by the Python code in the body of the `call` method.

Entire model

The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration. This allows you to checkpoint a model and resume training later, from the exact same state, without access to the original code.
###Code
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
###Output
_____no_output_____
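###Markdown
As a quick sanity check (a sketch, not in the original guide), the restored model still carries its compiled loss and metrics, so it can be evaluated immediately on the same data:
###Code
# Evaluate the model restored from 'my_model.h5' on the training data
loss_and_metrics = model.evaluate(data, labels, batch_size=32)
print(loss_and_metrics)
###Output
_____no_output_____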
###Markdown
Eager execution

[Eager execution](./eager.ipynb) is an imperative programming environment that evaluates operations immediately. This is not required for Keras, but is supported by `tf.keras` and useful for inspecting your program and debugging.

All of the `tf.keras` model-building APIs are compatible with eager execution. And while the `Sequential` and functional APIs can be used, eager execution especially benefits *model subclassing* and building *custom layers*: the APIs that require you to write the forward pass as code (instead of the APIs that create models by assembling existing layers).

See the [eager execution guide](./eager.ipynb#build_a_model) for examples of using Keras models with custom training loops and `tf.GradientTape`.

Distribution

Estimators

The [Estimators](./estimators.md) API is used for training models for distributed environments. This targets industry use cases such as distributed training on large datasets that can export a model for production.

A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the model to a `tf.estimator.Estimator` object with `tf.keras.estimator.model_to_estimator`. See [Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
###Code
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
###Output
_____no_output_____
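###Markdown
The text above mentions custom training loops with `tf.GradientTape` but does not show one. A minimal sketch, assuming eager execution was enabled at program startup (e.g. `tf.enable_eager_execution()` before any graph was built), which this notebook does not do:
###Code
# One hand-written training step on a single batch (sketch only)
eager_model = tf.keras.Sequential([layers.Dense(10, activation='softmax', input_shape=(32,))])
eager_optimizer = tf.train.AdamOptimizer(0.001)
x_batch = tf.constant(data[:32], dtype=tf.float32)
y_batch = tf.constant(labels[:32], dtype=tf.float32)
with tf.GradientTape() as tape:
    predictions = eager_model(x_batch)
    loss_value = tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(y_batch, predictions))
grads = tape.gradient(loss_value, eager_model.trainable_variables)
eager_optimizer.apply_gradients(zip(grads, eager_model.trainable_variables))
###Output
_____no_output_____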
###Markdown
Note: Enable [eager execution](./eager.ipynb) for debugging [Estimator input functions](./premade_estimators.md#create-input-functions) and inspecting data.

Multiple GPUs

`tf.keras` models can run on multiple GPUs using `tf.distribute.DistributionStrategy`. This API provides distributed training on multiple GPUs with almost no changes to existing code.

Currently, `tf.distribute.MirroredStrategy` is the only supported distribution strategy. `MirroredStrategy` does in-graph replication with synchronous training using all-reduce on a single machine. To use `DistributionStrategy` with Keras, convert the `tf.keras.Model` to a `tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then train the estimator.

The following example distributes a `tf.keras.Model` across multiple GPUs on a single machine.

First, define a simple model:
###Code
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
###Output
_____no_output_____
###Markdown
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object used to distribute the data across multiple devices, with each device processing a slice of the input batch.
###Code
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
###Output
_____no_output_____
###Markdown
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument to the `tf.distribute.MirroredStrategy` instance. When creating `MirroredStrategy`, you can specify a list of devices or set the `num_gpus` argument. The default uses all available GPUs, like the following:
###Code
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
###Output
_____no_output_____
###Markdown
Convert the Keras model to a `tf.estimator.Estimator` instance:
###Code
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
###Output
_____no_output_____
###Markdown
Finally, train the `Estimator` instance by providing the `input_fn` and `steps` arguments:
###Code
keras_estimator.train(input_fn=input_fn, steps=10)
###Output
_____no_output_____ |
1_ProgramHobbyStats.ipynb | ###Markdown
Many people agree that, for people who work in programming, an interest in programming is important. Having programming as a hobby may also be related to other features. But is that really the case? Before validating this view with data, let's familiarize ourselves with the dataset and the ProgramHobby feature.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import ProgramHobbyStats as phs
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
df.head()
print("The total number of rows in data set is {}.".format(df.shape[0]))
# First, we need to look at the feature ProgramHobby to see if there are any missing values.
program_hobby_df = df[df['ProgramHobby'].isnull()==False]
print("The number of rows with invalid ProgramHobby is {}."
.format(sum(df['ProgramHobby'].isnull())))
###Output
The total number of rows in data set is 51392.
The number of rows with invalid ProgramHobby is 0.
###Markdown
Great, there are no missing values for this feature.
###Code
program_hobby_df = df[df['ProgramHobby'].isnull()==False]
program_hobby_df['ProgramHobby'].value_counts()
# We also need to know the proportion of each value of this feature.
program_hobby_df['ProgramHobby'].value_counts()/program_hobby_df.shape[0]
# I want to see if most of the people on stackoverflow are professional developers.
program_hobby_df['Professional'].value_counts()
###Output
_____no_output_____
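###Markdown
Before summarizing, a quick follow-up calculation (a small addition, not in the original notebook) gives the exact share of respondents who identify as professional developers:
###Code
# Fraction of respondents whose Professional value is 'Professional developer'
share = (program_hobby_df['Professional'] == 'Professional developer').mean()
print("Professional developers make up {:.1%} of respondents.".format(share))
###Output
_____no_output_____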
###Markdown
It seems that most of the users on the StackOverflow website are professional developers, at least among the users who took part in the survey.
###Code
dev_df = program_hobby_df[program_hobby_df['Professional']=='Professional developer']
dev_df['ProgramHobby'].value_counts()
# For professional developers, how many of them make programming a hobby?
dev_df['ProgramHobby'].value_counts()/dev_df.shape[0]
###Output
_____no_output_____
###Markdown
Among professional developers, most people are interested in programming, which is in line with my intuition. Still, about `20%` of them don't think of programming as a hobby. At the same time, not that many developers contribute to open source projects: only about `34%`.
###Code
# I want to get a deeper understanding of the relationship between Professional and ProgramHobby.
pro_hobby_df = phs.gen_pro_hobby_df(program_hobby_df)
pro_hobby_df.shape
# View the ratios of the various values in column hobby.
pro_hobby_df['hobby'].value_counts().sort_index()/pro_hobby_df.shape[0]
###Output
_____no_output_____
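###Markdown
The helper `phs.gen_pro_hobby_df` lives in the accompanying `ProgramHobbyStats` module, which is not shown here. As a hedged sketch of what such a transformation could look like (an assumption, not the module's actual code), the hobby/contrib flags can be derived directly from the `ProgramHobby` strings:
###Code
# Derive 0/1 flags from the ProgramHobby text; 'Yes, both' counts for both flags
ph = program_hobby_df['ProgramHobby']
flags = pd.DataFrame({
    'Professional': program_hobby_df['Professional'],
    'hobby': (ph.str.contains('hobby') | ph.str.contains('both')).astype(int),
    'contrib': (ph.str.contains('open source') | ph.str.contains('both')).astype(int),
})
flags['both'] = ((flags['hobby'] == 1) & (flags['contrib'] == 1)).astype(int)
flags[['hobby', 'contrib', 'both']].mean()
###Output
_____no_output_____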
###Markdown
The proportions of the values in the `hobby` column here are consistent with the proportions in the data frame `program_hobby_df`. Programming enthusiasts make up about `75%` of respondents.
###Code
# View the ratios of the various values in column contrib.
pro_hobby_df['contrib'].value_counts().sort_index()/pro_hobby_df.shape[0]
###Output
_____no_output_____
###Markdown
The proportions of the values in the `contrib` column here are consistent with the proportions in the data frame `program_hobby_df`. People who contribute to open source projects account for about `33%`.
###Code
# View the ratios of the various values in column both.
pro_hobby_df['both'].value_counts().sort_index()/pro_hobby_df.shape[0]
# View the head of the dataframe.
pro_hobby_df.head()
# Create a new dataframe to store ratio data.
pro_vals = pro_hobby_df['Professional'].value_counts().sort_index()
ratios_df = phs.gen_hobby_ratios_df(pro_vals, pro_hobby_df)
ratios_df
# Let's draw the graph.
x = np.arange(pro_vals.index.size)
labels = [phs.professional_map[str(v)] for v in pro_vals.index.values]
width = 0.3
# Draw a bar chart.
plt.bar(x=x, height=ratios_df['hobby'], width=width, color='yellow', label=u'hobby')
plt.bar(x=x+width, height=ratios_df['contrib'], width=width, color='red', label=u'contrib')
plt.bar(x=x+width*2, height=ratios_df['both'], width=width, color='green', label=u'both')
plt.xticks(x, labels, size='small', rotation=50, horizontalalignment='right')
plt.xlabel('Professional')
plt.ylabel('Hobby/Contribution ratio')
plt.title('Program Hobby with Professional')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
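###Markdown
Before drawing conclusions from the chart, a quick count (a small addition, not in the original notebook) of how many respondents fall into `None of these`:
###Code
# Number of respondents whose Professional value is 'None of these'
print((pro_hobby_df['Professional'] == 'None of these').sum())
###Output
_____no_output_____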
###Markdown
I found a very interesting point. In the sample as a whole, few people choose `None of these` for Professional: only `914`. However, a large share of this group claims to program as a hobby and/or to contribute to open source projects. I can't explain this right now. We might be able to find the reason in other details of this dataset, but since that is not the focus of my analysis, I won't delve into it here. I think we can exclude this group from future analysis, because it may skew the results.
###Code
# Draw another picture and
# ignore the portion of the data that corresponds to ‘None of these’.
pro_vals2 = pro_vals.drop('None of these')
ratios_df2 = ratios_df.drop('None of these')
x = np.arange(pro_vals2.index.size)
labels = [phs.professional_map[str(v)] for v in pro_vals2.index.values]
width = 0.3
# Draw a bar chart.
plt.bar(x=x, height=ratios_df2['hobby'], width=width, color='yellow', label=u'hobby')
plt.bar(x=x+width, height=ratios_df2['contrib'], width=width, color='red', label=u'contrib')
plt.bar(x=x+width*2, height=ratios_df2['both'], width=width, color='green', label=u'both')
plt.xticks(x, labels, size='small', rotation=60, horizontalalignment='right')
plt.xlabel('Professional')
plt.ylabel('Hobby/Contribution ratio')
plt.title('Program Hobby with Professional')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____ |